Walkthrough With Data

Prerequisites

  1. How To Get DGA
  2. How To Build DGA
  3. How To Deploy DGA

Let's Get Started

In this example, we will be running Leaf Compression with DGA.

First, let's get some sample data. Already have data you want to use? That's great! Make sure it follows the format shown below and it will work with DGA.


    $ wget http://sotera.github.io/distributed-graph-analytics/data/example.csv
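
If you're bringing your own data, it should be a delimited edge list: one edge per line, source node then target node. The snippet below is a hand-made illustration using a hypothetical my-data.csv (the node IDs are invented, not taken from example.csv), assuming a comma delimiter to match the .csv extension; the delimiter DGA expects is configurable:


    $ cat my-data.csv
    1,2
    2,3
    3,1
    3,4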

If everything checks out, we can now copy our data set to a directory in HDFS. For this example we will create an input directory under /tmp, though you aren't required to use this particular path every time.


    $ hadoop fs -mkdir -p /tmp/dga/lc/input/

No need to create the output directory. The job will create it for us when it runs.
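
One caveat: if you re-run the analytic later, clear out the old output first, since Hadoop jobs typically refuse to write to an output path that already exists:


    $ hadoop fs -rm -r /tmp/dga/lc/output/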

Now let's copy our data onto HDFS.


    $ hadoop fs -copyFromLocal example.csv /tmp/dga/lc/input/
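
To double-check that the file made it, list the input directory:


    $ hadoop fs -ls /tmp/dga/lc/input/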

Finally, we can run our analytic! The command below uses the built-in DGARunner to run Leaf Compression.


    $ cd /opt/dga/
    $ ./bin/dga-giraph lc /tmp/dga/lc/input/ /tmp/dga/lc/output/ -w 1 -ca io.edge.reverse.duplicator=true

The command above runs dga-giraph-0.0.1.jar and executes the DGARunner class, passing in five command-line arguments, broken down below.
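
Here is our reading of those arguments; double-check them against the DGA documentation for your version:


    # lc                                   - the analytic to run (Leaf Compression)
    # /tmp/dga/lc/input/                   - HDFS directory holding the input edge list
    # /tmp/dga/lc/output/                  - HDFS directory where results will be written
    # -w 1                                 - number of Giraph workers
    # -ca io.edge.reverse.duplicator=true  - custom argument that emits each edge in both
    #                                        directions, treating the graph as undirected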

Is it done yet?
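
If you want to keep an eye on the job while it runs on a YARN cluster, the standard Hadoop tooling can list running applications (this is not DGA-specific):


    $ yarn application -list

When it finishes, let's see the results!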


    $ cd
    $ mkdir results/
    $ cd results
    $ hadoop fs -copyToLocal /tmp/dga/lc/output/* .

What are all these part files? Don't worry, let's make them one! Note: you might need to open up a subdirectory to see the parts. Use the cd command to navigate.


    $ cat part-* > bigfile.txt
    $ vi bigfile.txt
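
As an aside, hadoop's getmerge can do the copy and the concatenation in a single step, pulling all the part files into one local file:


    $ hadoop fs -getmerge /tmp/dga/lc/output/ bigfile.txt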

And there you have it! You ran your first analytic with DGA!