Mystery of the Shared Library - Solved!

The first technical problem to solve for the ONF submission was figuring out how to name and compile a shared library. The second was linking it with the application (the SDN controller).

The secret sauce is the Makefile. Below are the instructions. The complete code is available here. Go ahead and test it out.

1. The naming: These names have very specific uses when installing the library. Take notice.

major version: Anytime the API changes, the major version needs to increment. Numbering starts at zero.

minor version: Any upgrade to the library that does not change the API increments the minor version. Numbering starts at zero.

name: Pick the name you want used in the -l switch when linking the library with the application.
In the github example, this is smalle.

library name: lib<name>.so
In the github example, this is libsmalle.so.

soname: lib<name>.so.<major ver>
In the github example, this is libsmalle.so.0.

real name: lib<name>.so.<major ver>.<minor ver>
In the github example, this is libsmalle.so.0.0.

The library is compiled to create a file with the real name. The soname and library name are symlinks created when the library is installed. (At runtime the dynamic linker looks the library up by its soname, so an application linked against libsmalle.so.0 will not pick up libsmalle.so.1 - which is why an API change must bump the major version.)
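
In Makefile terms, the naming boils down to a handful of variables. Here is a sketch along the lines of the github example (the variable names are ours, not necessarily the repo's):

# pick the -l name and the version numbers
NAME          := smalle
MAJOR_VERSION := 0
MINOR_VERSION := 0

# library name: libsmalle.so
LIB_NAME  := lib$(NAME).so
# soname: libsmalle.so.0
SONAME    := $(LIB_NAME).$(MAJOR_VERSION)
# real name: libsmalle.so.0.0
REAL_NAME := $(SONAME).$(MINOR_VERSION)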

2. GCC flags:
Sources are compiled to object files using CFLAGS with gcc. The must-have CFLAGS for a shared library are:
CFLAGS := -fPIC -Wl,-export-dynamic

-fPIC - generates position-independent code. The alternative, -fpic, is not supported on all platforms.

-Wl,-export-dynamic - passes the export-dynamic flag to the linker. This is required to support callbacks in the library.

The object files are linked into the final library ELF, with the 'real name' as the filename. gcc with LDFLAGS achieves this. The necessary LDFLAGS for a shared library are:
LDFLAGS := -shared -Wl,-soname,$(SONAME) 
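
Putting the two together, the compile and link rules come out roughly like this - a sketch, not the repo's actual Makefile (recipe lines must start with a tab):

CC   := gcc
SRCS := $(wildcard *.c)
OBJS := $(SRCS:.c=.o)

# compile each source file to a position-independent object file
%.o: %.c
	$(CC) $(CFLAGS) -c $< -o $@

# link the object files into the ELF named after the 'real name'
$(REAL_NAME): $(OBJS)
	$(CC) $(LDFLAGS) -o $@ $^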


3. Installing the library:
Copy the shared library file to /usr/local/lib and run ldconfig to install it. Add /usr/local/lib to the environment variable LD_LIBRARY_PATH. Copy the library header to /usr/local/include.
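
As shell commands, using the github example's names (smalle.h stands in for whatever the library's header file is actually called):

sudo cp libsmalle.so.0.0 /usr/local/lib/
sudo ldconfig    # sets up the soname symlink libsmalle.so.0
# ldconfig does not create the plain lib<name>.so link; create it by hand
sudo ln -sf /usr/local/lib/libsmalle.so.0 /usr/local/lib/libsmalle.so
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
sudo cp smalle.h /usr/local/include/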


4. Linking to the library:
Compile the application with -l<name> and -I/usr/local/include.
Make sure you #include the header in the application code.
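
For the github example, that comes down to something like this (the application filenames are hypothetical):

gcc -I/usr/local/include -o controller controller.c -lsmalle
# add -L/usr/local/lib if your toolchain does not search it by default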

 
Reference:
http://tldp.org/HOWTO/Program-Library-HOWTO/shared-libraries.html

Volunteering for littleBits at Maker Faire San Mateo 2013

On a wholly different note, I was at the Maker Faire Bay Area this year, volunteering for a little company with little products that I am a huge fan of - littleBits.

LittleBits are building blocks with electronic circuits for sensors, motors, connectors, and logic that attach with magnets to create larger circuits. What a fun day that was - imagine getting to tinker with every type of 'bit' and even the prototypes while demo'ing them to kids of all ages!

The Eagle files for all littleBits are open-sourced on github.

Git and Github - setup, workflow and learnings

Git has arrived and is here to stay. The learning curve is steep and frustrating, but the results are rewarding, to say the least. I transitioned not too long ago from the simplicity (and accompanying inflexibility) of CVS. I can still hear myself groaning at the cruel change in terminology (commit is local... aaargh) and the lack of any direct mapping of CVS concepts onto git. Sidenote: do not bother looking for git equivalents of CVS commands. I groaned until the moment I saw the light.

The CodeChix ONF driver project collaboration would not have been easy without the power of git. But git in itself wasn't sufficient for our purposes - we also needed an online hosting service for the repository. We chose github.

Github has additional mechanisms for managing collaboration in its hosted service, which can be yet another source of frustration if not understood well - more on that later.

Specifically, we extensively used these features:
1. Fork
2. Pull request for codereview and merge
3. Pull from 'upstream'

The alternative to the above workflow is to clone directly from the project and push directly into it. I did not favor this approach, as it leaves no room for an intermediate step of reviewing the code. Pull requests are built for code reviews and explicit merges by the repo manager.

Here's the setup in great detail:
1. The main repo has a topic branch (called 'dev-onf-driver') apart from the master. This main repo with its 2 branches is our 'upstream'. You can set one or the other as the 'default' branch via the online github interface.

2. Fork - also executed online - creates a copy of the upstream in the collaborator's online github account. This is the collaborator's 'origin' and has both the branches.

3. git clone <path to origin>
Each collaborator clones the origin to their local development machine.

4. git checkout -b <branch name> --track <remote branch>
This step is necessary to create local copies of any additional branches from the origin. The names/paths of all remote branches are listed by 'git branch -a'.

5. git remote add upstream <path to upstream>
Necessary for pulling latest changes from upstream. The upstream (as noted in #1 above) is the main repo to which all collaborators will merge their changes via pull requests.

This completes the setup.
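
Condensed into commands for our repo (the URLs and account name are illustrative):

git clone https://github.com/<collaborator>/CC-ONF-driver.git    # clone the origin (the fork)
cd CC-ONF-driver
git checkout -b dev-onf-driver --track origin/dev-onf-driver     # local copy of the topic branch
git remote add upstream https://github.com/CodeChix/CC-ONF-driver.git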

The typical workflow with this setup is:
1. Merging changes to upstream:
    Each collaborator does the following to merge to upstream:
      a. A series of 'git commit's followed by a 'git push' when ready to merge. The changes are now updated in the collaborator's 'origin'.
      b. Log in to the online origin repo and start a 'pull request'. Edit the repo:branch combination to select the correct upstream and the correct origin branch. After confirming the changes displayed on the page, initiate the pull request.

2. Codereview:
    Every time a pull request is generated, it gives the other collaborators an opportunity to review and comment on the code. The pull request can be cancelled or updated with further changes.

3. Merge to upstream:
    Once the codereview is complete, the pull request is merged to upstream.

4. Pull changes to all collaborators' repos:
git pull upstream <branch name>
eg: git pull upstream dev-onf-driver
This is possible only after stashing or committing the changes in the local repo. Once the local repos are updated, the origin also needs to be brought in sync with the upstream by:
git push
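
The round trip for a single change then condenses to (the commit message is illustrative):

git commit -am "driver: fix the xyz handler"    # commit locally, repeat as needed
git push                                        # update the collaborator's origin
# ...open the pull request online, codereview, merge to upstream...
git pull upstream dev-onf-driver                # sync local from upstream
git push                                        # bring origin in sync as well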

Our experiences:
1. Pull requests are *not* very intuitive. The pull in this context refers to pulling a branch for merging. Pull in other git contexts refers to updating local repos with code from upstream. Getting pull requests right, in concept and in practice, is a struggle and the cause of many a mistake.

2. Collaborative merge permissions can be dangerous. It is best for the merges (from pull requests) to be controlled by one owner. It is terribly easy to create a pull request or merge with the wrong base and head branches/repos. Reverting this is not as easy.

3. The only way to update the online 'origin' repo is by doing a 'git pull upstream <branch>' followed by a git push. There is no online mechanism to achieve this. It can be annoying, but if the workflow is strictly established so that changes travel in one direction, it is not a problem. In our case, the graph edges were always unidirectional: upstream -> local -> origin -> upstream

4. git push to upstream: like all things git, this too is possible, but in a workflow like ours, dreaded!

What we may change next time:
Evaluate other means of codereview. Gerrit and Jenkins will be tested for ease of use and cost. Pull requests are the cause of many lost hours of productivity and will be avoided if possible.

200

Nervous and excited, we clicked the Submit button at 9:05PM on Sunday. Our code is being reviewed by the judges of the ONF driver competition as I write this!!!

Through the day we wrote up documentation, cleaned up code, debugged One Last Bug (more on that later), wrote test plan results, and put together miscellaneous submission documentation.

And we celebrated! Thanks entirely to Rupa, The Awesomest, who - we learnt later - had secretly been planning the celebration for months. Champagne, cake, photo shoot.. I drift..

More on One Last Bug:
Our target was to pull together the submission folder at 4PM, do final reviews and one last test run of the SDN controller, and submit at 5. All went well - until the final test run. There was that One Last Bug, staring right at us. The test ran fine all the way to the very final cleanup step and then failed on a mutex lock. Horrors - a synchronization issue!! Panic. Adrenaline.

It took 2 of us 3 hours to exterminate that one. A mutex lock was kicking in *after* that mutex had been cleared and, of course, failing on an invalid argument. Of course! But with hours to the submission deadline, such things are not obvious at all.
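
A minimal reproduction of that class of bug (hypothetical code, not our driver; 'cleared' here meaning destroyed - POSIX leaves locking a destroyed mutex undefined, but on our Linux/glibc setup it surfaced exactly as an invalid argument, EINVAL):

/* build: gcc -pthread lastbug.c (filename hypothetical) */
#include <pthread.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    pthread_mutex_t lock;
    pthread_mutex_init(&lock, NULL);

    /* the cleanup path runs first and destroys the mutex... */
    pthread_mutex_destroy(&lock);

    /* ...then a straggling codepath tries to take it */
    int rc = pthread_mutex_lock(&lock);
    if (rc != 0)
        printf("pthread_mutex_lock: %s\n", strerror(rc));
    return 0;
}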

One Last Bug, we got you too!

Back to the title of this post. 200. We couldn't have planned this!



Sneak Peek - T minus 1

A real-time update...

The wireshark capture in its full glory..

Sneak Peek - T minus 2

Friday came and went, and we got so buried in the submission work that this post almost didn't happen - until Ramya, the rockstar coder of our team, mentioned it.

So where are we?
The first hello packet is now received by the controller and a hello reply sent out. It's alive!!!!

Some interesting challenges we have been working on:
1. Lots of zero-sized packets received on sockets - what is the source? Are these TCP control packets and, if so, why are they punted up? Often, read_len is zero even with a valid message in the buffer. (A sketch follows the list.)

2. A change in the version of the library makes the .so suddenly unusable by the application. Why? This is the suspicious diff:

edeedhu@ubuntu:~/edeedhu-git/CC-ONF-driver$ git diff Makefile
diff --git a/Makefile b/Makefile
index fbd20f4..432766f 100644
--- a/Makefile
+++ b/Makefile
@@ -3,7 +3,7 @@ CC     := gcc
 LDFLAGS := -shared
 LIBS   := $(shell pkg-config --libs glib-2.0)
 RM     := rm -f
-MAJOR_VERSION := 0
+MAJOR_VERSION := 1
 MINOR_VERSION := 0

3. What is a good method to manually do static analysis for synchronization? Here's what I came up with - XML-style markup to follow different codepaths and track the locking/unlocking of mutexes. How would you have done it?
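
On question 1, one fact worth noting (a sketch with hypothetical names below): on a TCP stream socket, a read() that returns 0 is not an empty packet - it is end-of-file, meaning the peer closed the connection, and poll() reports that state as POLLIN.

#include <stdio.h>
#include <unistd.h>

/* Hypothetical helper, not our driver code: called when poll()
 * reports POLLIN on sockfd. Returns 0 to keep polling, -1 to close. */
int handle_pollin(int sockfd)
{
    char buf[2048];
    ssize_t read_len = read(sockfd, buf, sizeof(buf));

    if (read_len > 0)
        return 0;    /* a real message: hand buf/read_len to the parser */
    if (read_len == 0)
        return -1;   /* not an empty packet: EOF, the peer closed the connection */
    perror("read");  /* read_len < 0: an actual error */
    return -1;
}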

Now on to some gdb work.

Sneak Peek - T minus 3

Debugging debugging debugging. What would we do without printfs and gdb..

Our discussions today ranged across locks, POLLIN events on sockets, SWIG bindings, mininet, and hash tables, with 2-3 simultaneous topic threads at any given time in the day. And at 7:51 PM, the night shift has only just begun. :)

Sneak Peek - T minus 4

A highly talented team at CodeChix has been working undercover (well, almost, and not anymore) on a mission - create a software library for SDN. The goal? A working submission for ONF's OpenFlow Driver Competition.

We are officially 4 days away from submission! With 7000 lines of code and testing ongoing at a feverish pace, the progress so far:

1. One end to end channel UP - check!
2. One message plumbed through from source to destination - check!

We are in the most exciting phase of all - the 'make it work' one. ;) More updates tomorrow!

A sneak-peek at our github private repo: