Thursday, February 28, 2013

Software as a Service


I am currently watching some really interesting YouTube videos.
They contain the lectures of a course called "Software as a Service", which took place at Berkeley in Fall 2012.

So if you are interested, check out the link:
Computer Science 169, 001 - Fall 2012

My notes on the videos I have watched so far:

Lecture 1:
Instructions, Overview, Programming Models (Waterfall Model, Spiral Lifecycle, Agile Manifesto), Testing, Formal Methods, Productivity, DRY, SaaS
Good overview. I already knew most of the content.

Lecture 2:
SOA vs. SW silo: good explanation
SaaS infrastructure, clusters, cloud computing, fallacies and pitfalls
Pair programming: learned some new views about pair programming
Introduction to Ruby: interesting language, but not relevant for my thesis

Lecture 3:
All about Ruby: everything is an object, poetry mode, Ruby OOP, metaprogramming, loops ...
Very interesting programming language; its concise syntax lets a little code do a lot of work

Lecture 4:
High level view of architecture: very good overview
The model-view-controller pattern: how to organize data
CRUD: create, read, update, delete
Data mapper vs relational databases (good to know about), REST (good explanation)
Rails as an MVC-framework: not really interesting for me

Lecture 5:
MVC-Framework - Model: Databases & Migrations, automation
Rails cookbook: not important for my project as I don't use Rails
#1: Adding features (model, view, controllers)
#2: Add a new action to a Rails app
#3: Create a new submittable form
Everything that is important and big should be stored in a database!

Lecture 6:
Debugging, RASP (Read, Ask, Search, Post): Sounds plausible
MVC: Thin controllers and views, put real code in models
Test-driven development (TDD): write the tests before the implementation
Behavior-driven development (BDD): write requirements down as user stories
User stories: 1-3 sentences in everyday language, fits on 3x5 inch card, write with customer, should be SMART (Specific, Measurable, Achievable, Relevant, Timeboxed): want to implement that for project
Cucumber and Capybara: Rails specific
UI: storyboard, am I allowed to put a storyboard in a scientific paper?
General question: Am I allowed to cite something from the Berkeley-course in my paper? Or are quotations only allowed from books and well known web sites?

Lecture 7:
MVC: the View-Layer: Haml = HTML on a diet
Explicit scenarios: part of acceptance tests
Implicit scenarios: logical consequence of explicit requirements
Imperative: spelling out every step explicitly, often complicated
Declarative: try to make a domain language from steps, better choice
Pitfalls: customers who confuse mock-ups with completed features, sketches without storyboards, adding cool features that do not make the product more successful, trying to predict which code you need before you need it --> all sounds logical
Remember: debugging sucks, testing rocks
HTML: general overview (its markup roots go back to IBM's GML): not a lot of new stuff for me
CSS: introduction: I already learned about it
Comments should describe things that aren't obvious in the code: why, not what!

Lecture 13:
Measuring Productivity: Pivotal Tracker (I already created a project with pivotal tracker for my thesis)
Effective Meetings: SAMOSAS (Start, Agenda, Minutes, One, Send, Action, Set): sounds really useful
Version Control Systems: history, Git, solving conflicts, effective branching, forks, pitfall: good overview

I will update this post with every lecture I watch.


Wednesday, February 27, 2013

Simulation of a Geo Sensor


Here you can see my prototype simulation of a geo sensor, programmed in C#:

Group box "Sensor":
  • Description
    Specifies the current sensor. As I am working with weather data, it is "humidity" here.
  • Sensor origin
    A file where the sensor's data is stored.
    In my example the file is very simple and looks like this:
  • Interval
    Here you can specify the polling interval in seconds

Tab control "SVN":
  • Local Destination
    A file name in an existing local working copy where the data should be stored
  • Commit Message
    A message that will be stored with the SVN commit

Tab control "Git":
  • Local Repository
    A path to a local repository
  • Local Destination
    A file name in the local repository where the data should be stored
  • Commit Message
    A message that will be stored with the Git commit
Importing data into a Subversion repository:

When the "Start" button is clicked, data is imported from the geo sensor and displayed in the view on the right side. The new data is then stored and committed to Subversion.

Importing data into a local Git repository:

When the "Start" button is clicked, data is imported from the geo sensor and displayed in the view on the right side. The new data is then stored and committed to Git.
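Conceptually, one polling cycle of this Git workflow can be sketched with plain Git commands. The repository path, file name and sensor reading below are hypothetical; my actual prototype drives Git from C#, not from a shell:

```shell
#!/bin/sh
# One polling cycle of the sensor-to-Git workflow (hypothetical names).
set -e

repo=$(mktemp -d)                       # stands in for the "Local Repository" field
git init -q "$repo"
cd "$repo"
git config user.email "sensor@example.com"
git config user.name "Sensor Simulator"

# Append one sensor reading, then store and commit it.
echo "2013-02-27 10:15:00;humidity;42.0" >> humidity.csv   # the "Local Destination" file
git add humidity.csv
git commit -q -m "new humidity reading"                    # the "Commit Message" field
```

In the real program this cycle repeats once per configured interval.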

The same is possible in Java with the libraries "SVNKit" and "JGit"; I have already written two test programs with those libraries as well. But I decided to stick with C#, because the later visualization of the geo data will also be implemented in C#.

Problems encountered:

I have already encountered speed problems when working with SVN. I used an SVN repository from the University of Applied Sciences Salzburg. When I read the geo data about every second, storing the data in the SVN repository takes about 5 seconds. You can see the log message here:

This is no problem for the weather data I am using, because that data comes in with a period of 5 minutes. But with faster sensors, five seconds for storing the data would be too long.
I don't have these problems with Git, but I have to mention that I store the data in the local Git repository, not in a remote one.

If you have any remarks or suggestions, feel free to comment :-)


Evaluation and comparison of VCS


Today I want to take a closer look at existing version control systems.

The following table shows a general overview of existing centralized and decentralized systems:

For further evaluation I chose three of the versioning systems in the table. Subversion, Git and Mercurial have been picked because they are currently among the most popular systems and they are open source applications.

Apache Subversion (SVN)

Development of Apache Subversion was started in October 2000 by CollabNet. In February 2010 it became an Apache Software Foundation project. SVN is a centralized version control system. Collaboration with other developers, even in remote locations, is possible since SVN uses HTTP. HTTP (Hypertext Transfer Protocol) is a standard protocol which is allowed through most firewalls [7].

SVN offers a lot of features, for example a merging tool, branching support, commit messages and a whole lot more. It tries to resolve conflicts if two developers have been working on the same part of a file. SVN also features true atomic commits: either a whole commit completes or nothing is committed at all. This prevents repositories from becoming corrupted by incomplete data [7].

Because SVN is open source, easy to learn and offers a lot of features that other version control systems in the past didn't have, it has been adopted by a large number of companies. This results in wide support from third-party applications. Nevertheless, Subversion suffers from the disadvantages of centralized systems. With a slow internet connection, the speed of updating or committing data drops rapidly. SVN's merging abilities also suffer if a file is not cleared of the conflict markers it generates [7].


Mercurial

After the free version of BitKeeper was withdrawn from the market, Matt Mackall started developing Mercurial in April 2005. Mercurial is open source and a decentralized versioning system, including all advantages and disadvantages of that kind of system. For example, changes are most of the time just committed to the local repository, which gives a huge speed increase. Pushing to a remote location can be set up with SSH. SSH (Secure Shell) is an encrypted network protocol, which can be an advantage if all HTTP ports are closed in a locked-down network for whatever reason [7].

Mercurial is programmed in Python, which ensures good cross-platform compatibility. It is mostly a command line tool, but graphical front ends are also available. Mercurial offers some nice features; for example, changes can be exported to a file. Another user can then import that file into a remote repository, still under the original name of the first user. This can be useful if new code has to be reviewed and approved by other team members before committing [7].

Because of all these features, Mercurial has found a large number of users, for example the companies Mozilla, NetBeans and Growl [7].


Git

Git development started at nearly the same time as Mercurial. Linus Torvalds, the inventor of the Linux kernel, programmed the first version of Git in just 4 days. Git was developed to manage the source code of the Linux kernel, with two core ambitions: speed and security. It is a decentralized version control system [7].

Git takes a special focus on rapid branching. In Git it is easy to create separate branches for individual features, which can be merged back into the repository after they are finished. Git is also very scalable: even managing a huge project with it does not slow it down [7].
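As a small illustration, the feature-branch workflow described above looks like this on the command line (repository, branch and file names are hypothetical):

```shell
#!/bin/sh
# Sketch of rapid branching: one feature branch, merged back when done.
set -e
repo=$(mktemp -d)
git init -q "$repo"
cd "$repo"
git config user.email "demo@example.com"
git config user.name "Demo"
main=$(git symbolic-ref --short HEAD)   # default branch name (master or main)

echo "base" > app.txt
git add app.txt
git commit -q -m "initial commit"

git checkout -q -b feature-x            # separate branch for one feature
echo "feature work" >> app.txt
git commit -q -am "implement feature x"

git checkout -q "$main"                 # back to the main line
git merge -q feature-x                  # merge the finished feature back
```

Because branches are so cheap, this create-work-merge cycle is used for almost everything in Git.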

The local use of Git is quite impressive, but it also has the disadvantages all decentralized version control systems share. In addition, the setup and learning curve of Git are not as easy as with other systems. Communicating with a remote Git repository over SSH requires setting up SSH keys for the local and the remote machine. Nevertheless, there are a lot of books and online resources available for getting to know this versioning system [7].

Selection of two suitable systems for implementation

For the next part of my work, the implementation of version control systems for the processing of realtime geo data, I decided to use Subversion and Git.

These two systems have been chosen to have a comparison between centralized and decentralized systems. They are two very popular systems which are used in many companies. Git won over Mercurial because it is faster and offers more (branching) features [7].

[1] Apache Software Foundation. Apache Subversion.,
March 2013.
[2] Perforce Software Inc. Perforce., March 2013.
[3] Microsoft Corporation. Microsoft., March 2013.
[4] Scott Chacon. Pro Git. Apress, New York, 2009.
[5] Matt Mackall. Mercurial., March 2013.
[6] Canonical Ltd. Bazaar., March 2013.
[7] Chris Kemper and Ian Oxley. Foundation Version Control for Web Developers.
friendsofED, New York, 2012.

Friday, February 22, 2013

Version Control Systems


As I will work with version control systems a lot, I want to post a short introduction:


Version control systems (VCS) are tools that help developers manage changes to software. They are used for documenting, sharing and merging code, which is an essential part of software development and of the success of projects. Software is usually developed in teams whose members work in parallel on the same code, so it is very important to have a tool for sharing and merging changes [1].

Because this task is so important, there are already a lot of tools available. In my project I want to make use of these systems for handling real-time geo data. Merging geo data streams with each other is very similar to merging the code of several developers, and documentation is also needed for handling sensor data.
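To make the merging analogy concrete, here is a sketch of what a version control system does when two parallel changes, like two sensor streams updating the same value, touch the same line (all repository, branch and file names are hypothetical):

```shell
#!/bin/sh
# Two branches change the same line; the merge stops and marks the conflict.
repo=$(mktemp -d)
git init -q "$repo"
cd "$repo"
git config user.email "demo@example.com"
git config user.name "Demo"
main=$(git symbolic-ref --short HEAD)   # default branch name

echo "temperature: 20" > reading.txt
git add reading.txt
git commit -q -m "base reading"

git checkout -q -b station-b
echo "temperature: 22" > reading.txt
git commit -q -am "update from station B"

git checkout -q "$main"
echo "temperature: 21" > reading.txt
git commit -q -am "update from station A"

# Both sides edited the same line, so the merge needs a human decision:
git merge station-b || true
grep "<<<<<<<" reading.txt              # conflict markers show the overlap
```

Non-overlapping changes, by contrast, are merged fully automatically, which is exactly the behavior I hope to exploit for geo data.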

In general, version control systems can be classified into two types: centralized and distributed systems.

Centralized version control systems

Centralized version control systems consist of one single server that contains all the versioned files, and a number of clients that download files from and upload files to that server. This architecture has been the standard for many years [2].

  • Advantages:
    Easier to learn
    Everybody in a project knows what others are doing at the moment
    Administrators have control over what everyone is doing
    For administrators a central version control system is easier to deal with
    Used in many companies
  • Disadvantages:
    The centralized server is a single point of failure. If, for example, the server goes down for one hour, nobody can work with the version control system any more.
    If the hard disk where the central repository is stored becomes corrupted, all the data is lost. So making proper backups is a central aspect when using centralized version control systems.
  • Examples:
    Subversion, CVS, Perforce
Distributed version control systems

In a distributed version control system, each client fully mirrors the whole repository [2].

  • Advantages:
    Every checkout of a client is a full backup of the repository. If the server goes down, any of the client repositories can be copied back to the server to fully restore the system.
    A hierarchical model for workflows is possible; it allows working with different people in different groups within the same project.
    Speed increase
  • Disadvantages:
    More complex to learn [3].
  • Examples:
    Git (can also be used as centralized version control system), Mercurial, Bazaar, Darcs
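The first advantage above can be seen directly on the command line: a clone of a repository carries the complete history, so it can serve as a backup (all paths below are hypothetical):

```shell
#!/bin/sh
# Every clone is a full copy of the repository, history included.
set -e
server=$(mktemp -d)                     # stands in for the central server
git init -q "$server"
cd "$server"
git config user.email "demo@example.com"
git config user.name "Demo"
echo "v1" > data.txt
git add data.txt
git commit -q -m "first version"
echo "v2" > data.txt
git commit -q -am "second version"

backup=$(mktemp -d)/backup
git clone -q "$server" "$backup"        # an ordinary client checkout ...
cd "$backup"
git log --oneline | wc -l               # ... already holds the full history
```

In a centralized system, by contrast, a client checkout holds only one revision, so losing the server means losing the history.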

[1] Matthias Kleine, Robert Hirschfeld, Gilad Bracha. An Abstraction for Version Control Systems. Universitätsverlag Potsdam, 2012.
[2] Scott Chacon. Pro Git. Apress, 2009.
[3] Chris Kemper and Ian Oxley. Foundation Version Control for Web Developers. friendsofED, 2012.


My plan is to evaluate different systems and then choose two options for a practical implementation. My two current favorites are Subversion and Git; I think they are the two most common systems in companies. I also want to have one example of a centralized and one of a decentralized version control system in my thesis. But we will see ;)

Cheers, Tanja

Geo Sensing in a general context


As the aim of my project is to improve the data management of geo sensor networks, I want to post something about geo sensing in general:

Geo Sensing

Today geo-information technology is a rapidly rising discipline. The data is mostly collected via airborne and orbiting sensors using photogrammetric techniques, but how did it all start? In former times, geomatics was known as land surveying, which requires a lot of mathematics and physics. The German mathematician Carl Friedrich Gauss (1777-1855) spent about 20 years of his life establishing a geodetic coordinate system. The use of photogrammetry for collecting geo data started in the nineteenth century. Geo data is the most important requisite for doing research in the field of geoscience and for getting to know Earth-related processes better [4].

For collecting such data, the use of geo sensor networks has recently created a lot of new opportunities. Geo sensor networks are small, sensor-enabled devices that communicate wirelessly and can be distributed throughout a geographic environment [5]. So they can be defined as networks that monitor phenomena in a geographic space [6].

Examples for Geo Sensors [1]

Fields of application

Geo sensor networks open up a wide range of new possibilities, as the following examples will show. Three application types can be distinguished based on their observation characteristics: [1]

  • Terrestrial Ecology Observing Systems
    In these systems continuous monitoring is usual, for example observing the growth or health of plants. In Australia, a project was started in 2006 to monitor the growing conditions of a nectarine orchard. The orchard was covered with about 270 sensors and a gateway connected to the internet [1].
  • Geological Observation Systems
    These systems describe real-time detection applications such as a volcano sensor network deployment. For example, the volcano Mount Pinatubo on the island of Luzon in the Philippines erupted on the 15th of June 1991 after about 600 years of dormancy. For observing the volcano, scientists are interested in monitoring the mud flow, which is the dynamic spatial field in this example. With the help of geo sensor networks they can find out whether one of the major tributaries has split or whether the mud flow is still expanding [5].
  • Aquatic Observing Systems
    This group describes geo sensor systems that are mobile or attached to mobile objects, for example cars, animals or ocean buoys. Tsunami early warning systems and coastal and ocean observation systems also belong to this group [1].


[1] Silvia Nittel. A survey of geosensor networks: Advances in dynamic environmental
monitoring. Sensors, page 15, 2009.
[2] Scott Chacon. Pro Git. Apress, 2009.
[3] William I. Grosky. Senseweb: An infrastructure for shared sensing. Media Impact,
page 6, 2007.
[4] Mathias Lemmens. Geo-information: Technologies, Applications and the Environment. Springer, 2011.
[5] Matt Duckham, Silvia Nittel, Mike Worboys. Monitoring dynamic spatial fields using responsive geosensor networks.



Please read my proposal to get to know my current project better:

Proposal Geo Sensing

Cheers, Tanja


Hello and welcome to my blog.

I am studying Information Technology & Systems Management at the University of Applied Sciences in Salzburg, Austria. I am currently writing my master's thesis about geo sensing at Hawaii Pacific University in Honolulu, Hawaii.

University of Applied Sciences Salzburg
Hawaii Pacific University - HPU

I want to share my weekly progress of my work on this blog.

So feel free to comment, add suggestions or even help me to find spelling mistakes.

Mahalo, Tanja