Effective knowledge graph storage management is identified as the basic premise for making full use of knowledge graphs. Due to the lack of performance evaluation for knowledge graph stores, it is difficult for users to decide which one is best. To fill this gap, we propose a learned performance predictor, PreKar, to estimate the time costs of processing given workloads on candidate stores. However, none of the existing studies on performance prediction focuses on storage structures, and it is challenging to learn a well-trained model due to the low diversity of historical workloads and the requirement for lightweight embedding strategies. To address this problem, we first develop a novel candidate stores generator, which not only discovers all possible candidate stores for model training but also multiplies the number of training instances. Based on the generated stores, we derive an effective and lightweight encoder that not only embeds the main features of workloads and stores into the model but also guarantees the high efficiency of PreKar.