
Connecting Spark and Redis: A Detailed Look

February 02, 2016

The spark-redis package on GitHub is our¹ first step in the Spark-Redis journey. Spark has captured the public imagination around the real-time possibilities of big data, and we¹ hope to contribute to making this possibility a reality.

The spark-redis package is a Redis connector for Apache Spark that provides read and write access to all of Redis’ core data structures (RcDS) as RDDs (Resilient Distributed Datasets, not to be confused with RDBs).

I thought it would be fun to take the new connector for a test run and show off some of its capabilities. The following is my journey’s log. First things first…

Setting up

There are a few prerequisites you need before you can actually use spark-redis, namely: Apache Spark, Scala, Jedis and Redis. While the package specifically states version requirements for each piece, I actually used later versions with no discernible ill effects (v1.5.2, v2.11.7, v2.8 and unstable respectively).

I fumbled quite a lot trying to get it all working. Just after I finished putting everything in place, friend and fellow Redis-eur Tim Spann @PaaSDev published a step-by-step on “Setting up a Standalone Apache Spark Cluster” over at @DZone. That should get you through the hairiest parts if you’re on Ubuntu like me.

Once you’ve fulfilled all the requirements, you can just git clone https://github.com/RedisLabs/spark-redis, build it by running sbt (oh, yeah, install that one too) or just use the package from spark-packages.org, and you should be all ready to go… but go where exactly?
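For reference, the build boils down to something like the following sketch. It assumes a box with git and sbt already installed; the exact sbt target may differ depending on the connector version you check out.

    # Fetch and build the connector; `sbt package` is the generic target,
    # use `sbt assembly` instead if you want a single fat jar (plugin permitting).
    git clone https://github.com/RedisLabs/spark-redis
    cd spark-redis
    sbt package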

Mission statement

With this being an educational exercise, I needed a problem that could be solved with an advanced Directed Acyclic Graph engine and the fastest NoSQL Data Structure Store. After extensive research, I managed to identify perhaps the biggest challenge of contemporary data science – the counting of words. Since the word count challenge is the de facto “Hello, World!” equivalent in Spark core, I elected to use it as the basis for my tentative exploration and see how it could be adapted for use with Redis.

Reading the data

The first thing you need when counting words is, of course, words. Being the single-minded individual I am, I decided on counting the words in Redis’ source code files (this commit specifically), also hoping to reveal some interesting data-sciency facts in the process. With everything ready, I started by jumping into spark-shell:
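That looked roughly like this – the jar paths are placeholders for whatever your build produced, so adjust to taste:

    # Start the shell with the connector (and Jedis) on the classpath;
    # the paths below are placeholders, not the actual locations I used.
    ./bin/spark-shell --jars /path/to/spark-redis.jar,/path/to/jedis.jar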

The blinking cursor meant that the world was ready for my first line of Scala, and so I typed in:
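It was something along these lines – the path to the Redis sources and the restriction to .c/.h files are illustrative choices of mine:

    // Read the Redis C sources as (file URL, contents) pairs and count them;
    // the path is a placeholder for wherever your clone of the Redis repo lives.
    val wtf = sc.wholeTextFiles("/path/to/redis/src/*.[ch]")
    wtf.count()   // returned 100 for the commit used here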

That’s awesome! I barely got started and already data science proved to be useful: there are exactly 100 Redis source files! Of course, a plain shell one-liner could have produced the same count, but that wouldn’t have been nearly as much fun.

Encouraged by my success, I rushed forward, intent on getting the contents of the files transformed into words (that could later be counted). Unlike the usual examples that use the TextFileRDD, the WholeTextFilesRDD consists of (file URL, contents) pairs, so the following snippet did the work needed to split and clean the data (the call to the cache() method is strictly optional, but I try to follow best practices and expected to use that RDD again later on).
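A sketch of that snippet – the split regex and the filename cleanup are my choices, but the shape matches what’s described above:

    // Split each file's contents into words and keep just the bare filename;
    // fwds ends up as (filename, word) pairs. Splitting on non-word characters
    // is an assumption on my part.
    val fwds = wtf.flatMap { case (path, contents) =>
      val fname = path.split("/").last
      contents.split("\\W+").filter(_.nonEmpty).map(word => (fname, word))
    }.cache()   // optional, but the RDD gets reused below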

A note about variable names: I like them meaningful and short, so naturally wtf means WholeTextFiles, fwds is FileWords and so forth.

Once the fwds RDD had clean filenames and all the words were neatly split, I was off for some serious counting. First, I recreated the ubiquitous word counting example:
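That is, dropping the filenames and doing the canonical map/reduceByKey, roughly:

    // The classic word count, over all files at once.
    val wcnts = fwds.map { case (_, word) => (word, 1) }
                    .reduceByKey(_ + _)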

Pasting the above into the spark-shell and following with take confirmed success:
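Something like the following – the actual pairs are omitted here, since they depend on the commit you point it at:

    wcnts.take(10)   // peek at a few (word, count) pairs; 10 is an arbitrary sample size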

A note about the results: take isn’t supposed to be deterministic, but given that “requirepass” keeps surfacing these days, it may well be fatalistic. Also, 12657 must have some meaning but I’ve yet to find it.

Writing RDDs to Redis

Now comes the really fun stuff, a.k.a. Redis. I wanted to make sure that the results were stored somewhere safe (like my non-persisted, unbound, password-less Redis server ;)) so I could use them in later computations. Redis’ Sorted Sets are a perfect match for the word-count pairs and would also allow me to query the data like I’m used to. It took only one line of Scala code to do that (actually three lines, but the first two don’t count):
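Treat the following as a sketch rather than gospel – the connector’s exact signatures have shifted between releases, and the key name all:words and connection details are my own choices. The shape is the same though: an import, a conversion of the counts to the (member, score) string pairs a Sorted Set expects, and the actual write.

    import com.redislabs.provider.redis._   // brings the Redis write methods into scope

    // Sorted Set members and scores go in as strings; connection details are
    // assumed to come from the Spark config (spark.redis.host / spark.redis.port) --
    // early connector versions took them as explicit arguments instead.
    val wordsZ = wcnts.map { case (word, count) => (word, count.toString) }
    sc.toRedisZSET(wordsZ, "all:words")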

With the data where I was comfortable examining it, I fired up the cli and did a few quick reads:
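For example (key name as above; the actual numbers will vary with the commit):

    redis-cli ZCARD all:words                      # how many distinct words
    redis-cli ZSCORE all:words requirepass         # count for a single word
    redis-cli ZREVRANGE all:words 0 4 WITHSCORES   # the top five words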

Nice! What else can I keep in Redis? Why, everything of course. The filenames themselves are perfect candidates, so I made another RDD and stored it in a regular Set:
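Again roughly – the set name all:files is my own:

    // Distinct filenames from the (filename, word) pairs, stored in a plain Set.
    val fnames = fwds.keys.distinct()
    sc.toRedisSET(fnames, "all:files")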

Despite being very useful for science purposes, the contents of the fnames Set are pretty mundane and I wanted something more… so how about storing the word counts for each file in its very own Sorted Set? With a few transformations/actions/RDDs, I was able to do just that:
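Here is one way to sketch it – count per (file, word) pair, then write one Sorted Set per file. The wc: key prefix and the driver-side loop are my choices and not necessarily the most efficient approach:

    // Per-file word counts as (filename, (word, count)) pairs.
    val perFile = fwds.map { case (fname, word) => ((fname, word), 1) }
                      .reduceByKey(_ + _)
                      .map { case ((fname, word), count) => (fname, (word, count.toString)) }
                      .cache()

    // One Sorted Set per file, e.g. "wc:sds.c"; looping on the driver keeps the
    // sketch simple at the cost of one filter pass per file.
    perFile.keys.distinct().collect().foreach { fname =>
      sc.toRedisZSET(perFile.filter(_._1 == fname).values, s"wc:$fname")
    }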

Back to redis-cli:
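For instance (sds.c picked arbitrarily; any Redis source file will do):

    redis-cli ZCARD wc:sds.c                       # distinct words in that file
    redis-cli ZREVRANGE wc:sds.c 0 4 WITHSCORES    # its top five words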

Reading RDDs from Redis

I could have danced (stored word-count data) all night, but if you only write and never read, you’d be better off using a spark-/dev/null connector. So to make practical use of the data in Redis, I ran the following code that took the per-file word counts and reduced them to basically the same output as the classic WC challenge:
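Roughly the following, assuming a connector version that can read Sorted Sets by key pattern (newer releases expose this as fromRedisZSetWithScore; older ones go through a keys RDD first):

    // Read every "wc:*" Sorted Set back as (member, score) pairs and sum the
    // scores per word -- effectively re-deriving the all-files word count.
    val totals = sc.fromRedisZSetWithScore("wc:*")
                   .map { case (word, count) => (word, count.toLong) }
                   .reduceByKey(_ + _)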

Then back to spark-shell to test this code and get a grand total of all words:
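Summing the per-word totals gives the grand total; the exact number obviously depends on the commit you counted:

    totals.values.sum()   // grand total of words across all Redis source files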

To wrap things up, I couldn’t just take Spark’s result for granted, so I double checked using a Lua script:
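Something along these lines – a quick-and-dirty script that walks the wc:* keys and sums all the scores (KEYS is fine for a one-off check like this, though you wouldn’t want it anywhere near production code):

    redis-cli EVAL "
    local total = 0
    for _, key in ipairs(redis.call('KEYS', 'wc:*')) do
      local zset = redis.call('ZRANGE', key, 0, -1, 'WITHSCORES')
      for i = 2, #zset, 2 do total = total + zset[i] end
    end
    return total" 0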

Closing notes

Back in the days when data was small, you could get away with counting words using a simple wc -w. As data grows, we find new ways to abstract solutions and in return gain flexibility and scalability. While I’m no data scientist (yet), Spark is an exciting tool to have – I’ve always liked DAGs – and its core is extremely useful. And that’s even without going into its integration with the Hadoop ecosystem and extensions for SQL, streaming, graph processing and machine learning… yum.

Redis quenches Spark’s thirst for data. spark-redis lets you marry RDDs and RcDSs with just a line of Scala code. The first release already provides straightforward RDD-parallelized read/write access to all core data structures and a polite (i.e. SCAN-based) way to fetch key names. Furthermore, the connector packs a considerable hidden punch, as it is actually (Redis) cluster-aware and maps RDD partitions to hash slots to reduce inter-engine shuffling. This is just the first release for the connector, and there’s another release coming Soon(tm) which may break and change a few things, so take that into consideration. Of course, since the package is open source you’re more than welcome to use/extend/fix/complain about it 🙂

1. ‘we’ being Redis and the talented Sun He @sunheehnus of the Redis community