gve.cxx

A high-performance parallel graph interface supporting efficient dynamic batch updates.

Research in graph-structured data has grown rapidly due to graphs' ability to represent complex real-world information and capture intricate relationships, particularly as many real-world graphs evolve dynamically through edge/vertex insertions and deletions. This has spurred interest in programming frameworks for managing, maintaining, and processing such dynamic graphs. In our report, we evaluate the performance of PetGraph (Rust), Stanford Network Analysis Platform (SNAP), SuiteSparse:GraphBLAS, cuGraph, Aspen, and our custom implementation in tasks including loading graphs from disk to memory, cloning loaded graphs, applying in-place edge deletions/insertions, and performing a simple iterative graph traversal algorithm. Our implementation demonstrates significant performance improvements: it outperforms PetGraph, SNAP, SuiteSparse:GraphBLAS, cuGraph, and Aspen by factors of 177x, 106x, 76x, 17x, and 3.3x in graph loading; 20x, 235x, 0.24x, 1.3x, and 0x in graph cloning; 141x/45x, 44x/25x, 13x/11x, 28x/34x, and 3.5x/2.2x in edge deletions/insertions; and 67x/63x, 86x/86x, 2.5x/2.6x, 0.25x/0.24x, and 1.3x/1.3x in traversal on updated graphs with deletions/insertions.

Below, we plot the runtime (in seconds, logarithmic scale) for loading a graph from file into memory with PetGraph, SNAP, SuiteSparse:GraphBLAS, cuGraph, Aspen, and Our DiGraph for each graph in the dataset.

[Figure: Runtime of loading each graph from file into memory.]

Next, we plot the runtime (in milliseconds, logarithmic scale) of deleting a batch of 10^-7|E| to 0.1|E| randomly generated edges from a graph, in-place, in multiples of 10. Here, we evaluate PetGraph, SNAP, SuiteSparse:GraphBLAS, cuGraph, Aspen, and Our DiGraph on each graph in the dataset. The left subfigure presents overall runtimes using the geometric mean for consistent scaling, while the right subfigure shows runtimes for individual graphs.

[Figure: Runtime of batch edge deletions; left: geometric mean, right: individual graphs.]

Below, we plot the runtime of inserting a batch of edges into a graph, in-place, using PetGraph, SNAP, SuiteSparse:GraphBLAS, cuGraph, Aspen, and Our DiGraph.

[Figure: Runtime of batch edge insertions.]

Finally, we plot the runtime of traversing a graph using a simple iterative algorithm (42-step reverse walks from each vertex in a graph) on graphs with edge deletions. We evaluate PetGraph, SNAP, SuiteSparse:GraphBLAS, cuGraph, Aspen, and Our DiGraph on each graph in the dataset.

[Figure: Runtime of traversal on graphs with edge deletions.]

Refer to our technical report for more details:
Performance Comparison of Graph Representations Which Support Dynamic Graph Updates.



Installation

Run:

$ npm i gve.cxx

And then include gve.hxx as follows:

// main.cxx
#include "node_modules/gve.cxx/gve.hxx"

int main() { /* ... */ }

Then compile with clang or gcc as usual:

$ clang -std=c++17 -fopenmp -target x86_64-pc-windows-msvc main.cxx  # or, use gcc

You may also use a simpler include, if you add the path to node_modules/gve.cxx to your compiler's include paths:

// main.cxx
#include <gve.hxx>

int main() { /* ... */ }

Then pass the include path when compiling:

$ clang -I./node_modules/gve.cxx -std=c++17 -fopenmp -target x86_64-pc-windows-msvc main.cxx

Example

#include <iostream>
#include <gve.hxx>

using namespace std;


int main() {
  // Create an empty directed graph
  gve::DiGraph<int> graph;

  // Add edges to the graph
  graph.addEdge(0, 1);
  graph.addEdge(1, 2);
  graph.addEdge(2, 3);
  graph.addEdge(3, 4);
  graph.addEdge(4, 0);

  // Commit the pending edge insertions (required after batch modifications).
  gve::updateU(graph);

  // Print the number of vertices and edges
  cout << "Number of vertices: " << graph.order() << endl;
  cout << "Number of edges: " << graph.size() << endl;

  return 0;
}

API Overview

Core Classes

Graphs

  • ArenaDiGraph<K, V, E> Directed graph implementation using arena allocation. Optimized for high-performance scenarios with static or append-only workloads.

    • K: Key type (vertex ID), default uint32_t.
    • V: Vertex value type, default None.
    • E: Edge value type (weight), default None.
  • DiGraph<K, V, E> Standard directed graph implementation using std::vector. More flexible than ArenaDiGraph, but with potentially higher memory overhead.

  • DiGraphCsr<K, V, E, O> Read-only directed graph using Compressed Sparse Row (CSR) format. Best for static graph analysis.

    • O: Offset type, default size_t.
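
As a rough illustration, the template parameters compose as shown in the sketch below. The type aliases and the reliance on the defaults for V and E are assumptions for readability; only the class and parameter names come from the listing above.

#include <cstdint>
#include <gve.hxx>

// Hypothetical aliases, assuming the defaults described above.
using Graph      = gve::DiGraph<uint32_t>;       // mutable, vector-backed
using ArenaGraph = gve::ArenaDiGraph<uint32_t>;  // arena-allocated; static or append-only workloads
using CsrGraph   = gve::DiGraphCsr<uint32_t>;    // read-only CSR; static analysis

int main() {
  Graph g;       // vertices keyed by uint32_t, no vertex/edge values
  ArenaGraph a;
  return 0;
}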

Graph Operations

Construction & Modification

  • addVertex(G& a, K u, V d): Add a vertex u with data d to graph a.
  • addEdgeU(G& a, K u, K v, E w): Add an edge (u, v) with weight w.
  • removeEdgeU(G& a, K u, K v): Remove edge (u, v).
  • updateU(G& a): Commit changes to the graph structure (required after batch modifications).
  • duplicate(const G& x): Create and return a deep copy of graph x.
  • transpose(const G& x): Return the transpose (reversed edges) of graph x.
  • symmetrizeU(G& a): Make the graph symmetric (undirected) in-place.
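
A minimal sketch of the modify-then-commit pattern, using the signatures listed above. Omitting the weight argument of addEdgeU (relying on the default edge type) is an assumption:

#include <gve.hxx>

int main() {
  gve::DiGraph<int> g;
  gve::addEdgeU(g, 0, 1);     // stage edge insertions...
  gve::addEdgeU(g, 1, 2);
  gve::removeEdgeU(g, 0, 1);  // ...and deletions
  gve::updateU(g);            // commit the staged changes

  auto h  = gve::duplicate(g);  // deep copy
  auto gt = gve::transpose(g);  // reversed edges
  gve::symmetrizeU(h);          // make h undirected, in-place
  return 0;
}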

Partitioning

  • partitionById(const G& x, int p, int P): Partition graph vertices based on ID ranges.
  • partitionByBfs(const G& x, int p, int P, FC fc, FT ft): Partition using BFS traversal and a cost function.
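
For example, handing worker p its ID-range slice of the vertices might look like this sketch; the return type of partitionById and the exact semantics are assumptions:

#include <gve.hxx>

int main() {
  gve::DiGraph<int> x;
  gve::addEdgeU(x, 0, 1); gve::addEdgeU(x, 1, 2);
  gve::updateU(x);

  const int P = 4;  // number of partitions
  for (int p = 0; p < P; ++p) {
    auto part = gve::partitionById(x, p, P);  // vertices whose IDs fall in slice p
    // ... process `part` independently, e.g., on thread p
  }
  return 0;
}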

Batch Updates

  • generateEdgeDeletions / generateEdgeInsertions: Generate random sets of edge updates for testing.
  • tidyBatchUpdateU: Clean, sort, and deduplicate a batch of edge updates.
  • applyBatchUpdateU: Apply a batch of edge insertions and deletions to a graph.
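
The intended pipeline appears to be generate, then tidy, then apply. The sketch below follows that order, but the parameter lists are guesses; the real functions may take RNGs, batch-size fractions, or separate insertion/deletion containers.

#include <gve.hxx>

int main() {
  gve::DiGraph<int> g;
  gve::addEdgeU(g, 0, 1); gve::addEdgeU(g, 1, 2);
  gve::updateU(g);

  // Hypothetical parameter lists.
  auto deletions  = gve::generateEdgeDeletions(g, 100);   // 100 random deletions
  auto insertions = gve::generateEdgeInsertions(g, 100);  // 100 random insertions
  gve::tidyBatchUpdateU(deletions, insertions, g);   // sort + deduplicate the batch
  gve::applyBatchUpdateU(g, deletions, insertions);  // apply in-place
  gve::updateU(g);                                   // commit
  return 0;
}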

Input / Output

  • readMtxFormatToGraphW: Read a graph from a Matrix Market file.
  • write(ostream& out, const G& graph, bool detailed): Write the graph structure to an output stream.
  • readEdgelistFormat*: Family of functions to read edge list formats.
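
A sketch of reading and dumping a graph, assuming the W suffix means the graph is written into an output parameter and that the function takes a file path; the filename is hypothetical:

#include <iostream>
#include <gve.hxx>

int main() {
  gve::DiGraph<int> g;
  // Assumed calling convention: output-parameter graph, then input path.
  gve::readMtxFormatToGraphW(g, "example.mtx");
  gve::write(std::cout, g, /*detailed=*/false);  // summary of the structure
  return 0;
}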

Algorithms

Note: Most algorithms have a parallel version with the Omp suffix (e.g., pagerankStaticOmp) if OpenMP is enabled.

Traversal

  • bfsVisitedForEach(const G& x, K u, FT ft, FP fp): Perform BFS starting at u.
  • dfsVisitedForEach(const G& x, K u, FT ft, FP fp): Perform DFS starting at u.
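
The FT and FP parameters look like callables; the sketch below assumes ft decides whether to continue the traversal and fp processes each visited vertex. Both arities and roles are guesses, so the lambdas are written generically:

#include <iostream>
#include <gve.hxx>

int main() {
  gve::DiGraph<int> x;
  gve::addEdgeU(x, 0, 1); gve::addEdgeU(x, 1, 2); gve::addEdgeU(x, 2, 3);
  gve::updateU(x);

  auto ft = [](auto&&...) { return true; };  // assumed filter: follow every edge
  auto fp = [](auto u, auto&&...) {          // assumed: called once per visited vertex
    std::cout << "visited " << u << "\n";
  };
  gve::bfsVisitedForEach(x, 0, ft, fp);  // BFS from vertex 0
  return 0;
}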

PageRank

  • PagerankOptions<V>: Configuration options (damping factor, tolerance, max iterations).
  • PagerankResult<V>: Result structure containing ranks and timing statistics.
  • pagerankStatic(const G& xt, const PagerankOptions& o): Compute PageRank on a static graph.
  • pagerankNaiveDynamic: Update PageRank using naive dynamic approach.
  • pagerankDynamicTraversal: Update PageRank using dynamic traversal.
  • pagerankDynamicFrontier: Update PageRank using dynamic frontier approach.
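
A sketch of the static variant, using the pagerankStatic signature above. Passing the transposed graph follows the xt parameter name in the listing (PageRank pulls rank along in-edges); the float value type is an assumption, and default options are used:

#include <gve.hxx>

int main() {
  gve::DiGraph<int> x;
  gve::addEdgeU(x, 0, 1); gve::addEdgeU(x, 1, 2); gve::addEdgeU(x, 2, 0);
  gve::updateU(x);
  auto xt = gve::transpose(x);  // PageRank operates on in-edges

  gve::PagerankOptions<float> o;        // default damping, tolerance, max iterations
  auto r = gve::pagerankStatic(xt, o);  // r holds the ranks and timing statistics
  // auto r = gve::pagerankStaticOmp(xt, o);  // parallel variant, when OpenMP is enabled
  return 0;
}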

Community Detection

  • LouvainOptions<V> / LeidenOptions<V>: Configuration options.
  • louvainStatic(const G& x, const LouvainOptions& o): Detect communities using the Louvain method.
  • leidenStatic(const G& x, const LeidenOptions& o): Detect communities using the Leiden method.
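
A sketch of running Louvain, using the louvainStatic signature above. The float option type and the shape of the result are assumptions; symmetrizing first reflects the usual expectation that community detection runs on an undirected graph:

#include <gve.hxx>

int main() {
  gve::DiGraph<int> x;
  gve::addEdgeU(x, 0, 1); gve::addEdgeU(x, 1, 2); gve::addEdgeU(x, 2, 0);
  gve::updateU(x);
  gve::symmetrizeU(x);  // make the graph undirected, in-place

  gve::LouvainOptions<float> o;       // default resolution/tolerance
  auto r = gve::louvainStatic(x, o);  // r presumably maps each vertex to a community
  // auto r2 = gve::leidenStatic(x, gve::LeidenOptions<float>());
  return 0;
}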

Utilities

  • timeNow(): Get current time point.
  • duration(start, end): Calculate duration in milliseconds.
  • retry(Func f, int retries): Helper to retry a function on failure.
  • FormatError: Exception thrown for data format errors.
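
A sketch of the timing and retry helpers, using the signatures above; the assumption that retry expects a callable returning success/failure is a guess:

#include <iostream>
#include <gve.hxx>

int main() {
  auto t0 = gve::timeNow();
  // ... work to be measured ...
  auto t1 = gve::timeNow();
  std::cout << gve::duration(t0, t1) << " ms\n";  // duration in milliseconds

  // Assumed semantics: re-invoke the callable up to `retries` times on failure.
  gve::retry([&]() { return true; }, 3);
  return 0;
}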

CUDA Utilities

(Available when compiled with CUDA)

  • copyValuesCuW: Copy memory between host and device.
  • fillValueCuW: Fill device memory.
  • sumValues*: Reduction operations.
  • liNorm*: Norm calculations.
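
A host-side sketch under the Cu naming; only the function names come from the listing, the parameter orders are assumptions, and cudaMalloc/cudaFree are the standard CUDA runtime calls:

#include <cuda_runtime.h>
#include <vector>
#include <gve.hxx>

int main() {
  std::vector<float> host(1024, 1.0f);
  float* dev = nullptr;
  cudaMalloc(&dev, host.size() * sizeof(float));

  // Assumed signatures: destination first, then source and count.
  gve::copyValuesCuW(dev, host.data(), host.size());  // host -> device
  gve::fillValueCuW(dev, host.size(), 0.0f);          // fill device buffer

  cudaFree(dev);
  return 0;
}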

