Open Access
Efficient Knowledge Graph Embedding Training Framework with Multiple GPUs
Tsinghua Science and Technology 2023, 28 (1): 167-175
Published: 21 July 2022

When training a large-scale knowledge graph embedding (KGE) model with multiple graphics processing units (GPUs), a partition-based method is necessary for parallel training. However, existing partition-based training methods suffer from low GPU utilization and high input/output (IO) overhead between memory and disk. To address the high IO overhead between disk and memory, we optimized twice partitioning with fine-grained GPU scheduling, which reduces the IO overhead between CPU memory and disk. To address the low GPU utilization caused by GPU load imbalance, we proposed balanced partitioning and dynamic scheduling methods that accelerate training in different cases. With these methods, we proposed fine-grained partitioning KGE, an efficient KGE training framework for multiple GPUs. Experiments on standard knowledge graph benchmarks show that our method achieves a speedup over existing KGE training frameworks.
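The load imbalance described above arises when partitions hold very different numbers of triples, so some GPUs finish their partitions early and sit idle. As a rough illustration of what a balanced partitioning step must achieve, the sketch below applies a classic greedy longest-processing-time heuristic to spread partitions across GPUs. This is only a minimal sketch under the assumption that each partition is characterized by its triple count alone; the function name `balanced_partition_assignment` is hypothetical and is not the paper's API.

```python
import heapq

def balanced_partition_assignment(partition_sizes, num_gpus):
    """Greedy longest-processing-time (LPT) assignment: place each
    partition on the currently least-loaded GPU so that per-GPU triple
    counts stay roughly equal. `partition_sizes` maps a partition id
    to its number of triples."""
    # Min-heap of (current_load, gpu_id); the least-loaded GPU pops first.
    heap = [(0, gpu) for gpu in range(num_gpus)]
    heapq.heapify(heap)
    assignment = {}
    # Visit partitions from largest to smallest (classic LPT ordering).
    for pid, size in sorted(partition_sizes.items(),
                            key=lambda kv: kv[1], reverse=True):
        load, gpu = heapq.heappop(heap)
        assignment[pid] = gpu
        heapq.heappush(heap, (load + size, gpu))
    return assignment

# Example: 6 partitions with skewed triple counts balanced over 2 GPUs.
sizes = {0: 900, 1: 850, 2: 400, 3: 300, 4: 250, 5: 200}
print(balanced_partition_assignment(sizes, num_gpus=2))
```

A static assignment like this only balances estimated load; the dynamic scheduling the abstract mentions would additionally reassign work at run time when actual GPU progress diverges from such estimates.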
