
Thursday, December 19, 2013

5 in 5

IBM recently published its 5 in 5 predictions (five predictions for the next five years). To me, the predictions look like logical consequences of current technologies and approaches applied to broader areas of use. The key principle is that computers become smarter, more cognitive, and more adaptive to their context.
I'd like to share my view on these five predictions.
  1. The classroom will learn you. Probably a good way of using a learner's potential and supporting them in the best way possible. Applied to public schools, this would be a great help in giving our kids the best chances. But when I look at the current and past state of our schools, I doubt this will become a reality within 5 years, at least for public schools, as cost pressure and strong conservative forces have prevented major innovations in the public education system. At least physical punishment is forbidden in schools nowadays :). I see the best chances for private schools or, even more so, for adult education.
  2. Buying local will beat online. I doubt this will happen. The major constraint is time, and the biggest advantage of online shopping is that it separates the process of buying from your location and time, allowing even customers with stuffed schedules to purchase whatever they want. Another downside of computerized shopping, as we can see with existing online shops like Amazon, are the tailored offerings matching customer interests, needs and behaviors to maximize the probability of selling something and thereby the profit (because that's the basic intent). As people get used to this (because they obviously like it), it might lead to a (consumer) life that's totally dictated by computers. People might trust that the computer is always right and stop scrutinizing decisions. Today, going to a local shop allows us to flee from computerized proposals and simply explore.
  3. Doctors will routinely use your DNA to keep you well. This might be great for detecting and treating certain diseases. But what is the value of death? That it makes us value life. The question (even today) is what should be treated because it preserves life, and what should not be treated because it preserves dignity. The other constraint: as long as there is a real danger that state authorities capture and analyze this data and possibly derive executive actions from it, I doubt anyone will seriously put their personal "blueprint" into the cloud. A case I read about recently shows what real consequences a simple but erroneous computerized decision can have. A harmless money transfer to pay a bill was blocked because its payment reference contained "Südanflug" (landing approach from the south), which the banking computer systems - not being able to deal with umlauts - interpreted as "Sudan-Flug" (flight to/from Sudan) and raised a terror warning.
  4. A digital guardian will protect you online. The key question is which decision has priority: who may overrule whom? Some recent cases from the aviation industry are good examples of what can happen when a computer decides based on algorithms and sensor inputs but does not capture the overall situation completely. One of those cases was a landing approach at the airport in Hamburg during stormy weather with heavy crosswinds. On touchdown of one of the wheels, the computer switched to ground mode, limiting the maximum deflection of the ailerons, which was, however, needed to compensate for the strong crosswinds. As a result, one of the wingtips touched the ground, but the pilots resolved the situation by going to full throttle and taking off again. If computers could ever capture the entirety of the user's context, it might be possible to really help. But again, I fear the dangers of digitally capturing the entirety of your life as long as state authorities are able to misuse that data.
  5. The city will help you live in it. Surely, the computer might propose things I would not easily find myself. The main driver, however, is still money - pretty similar to point 2 - maximizing profits, e.g. by sending people to places where they are most likely to spend a lot of money. So the computer will most certainly make suggestions based on my habits, my location and probably my network. This might result in it proposing the same bar over and over again. But I could have that even without a computer, and sometimes I simply want to try something different. The goal should be to broaden one's horizons, not to limit them; only then does it serve a purpose.
Don't get me wrong, I'm quite looking forward to what the future will bring with all the existing and upcoming technological possibilities. But I'm also skeptical about how much of it will really serve a purpose - to help humans be better humans (which is desirable) - rather than maximize profits (which is the real driver behind A LOT).
So the key question is and will be: what roles will humans and machines play in the world? What is the value of an individual's life? When will the machines really serve us, freeing most of us from having to be "worker drones" and allowing everyone a life of self-determination that is not constrained by the (un)availability of wealth?
I believe these questions won't be answered by computers, but by society. So if computers are getting smarter and more able to learn, we should strive to make the same progress for humans - then we will achieve a lot more.

Tuesday, December 17, 2013

Git vs SVN

Last Friday we had a discussion in our office about which version control tool is better, Git or SVN.
I am sure there are quite a few discussions around this topic on the web, and the simple answer is, as usual: it depends.

This "that depends" is described in more detail in an interessting StackOverflow repsonse. Further I found a comparison on kernel.org quite interesting which I'd like to summarize for my own purpose. The following comparison table should give an overview but doesn't claim to be complete.

Architecture
  • Git: Distributed, peer-to-peer topology. Perfectly supports distributed, autonomous teams or developers without a permanent network connection.
  • SVN: Centralized, client-server topology. Supports centrally organized teams with a permanent network connection well. Backup is easy.

Access Control
  • Git: No effective read/write restrictions can be imposed (open source mindset).
  • SVN: Fine-grained ACLs are possible.

Backup
  • Git: A backup is required at every repository location.
  • SVN: A backup strategy is only required for the central repository.

Checkin/Checkout
  • Git: A complete checkout of the entire repository is always required. There is a difference between a commit to the local repository and a commit (push) to a remote repository.
  • SVN: Partial checkouts are possible; there is only one repository to check in to and out of.

Branching
  • Git: An intrinsic core concept, as every repository (clone) is a branch in itself. Supports only single-branch commits, which is less flexible but simpler and therefore easier to automate.
  • SVN: More flexible and allows multi-branch commits, but this increases complexity and makes it more difficult to understand. More conflicts occur that need manual attention.

Binary Files
  • Git: Supported, but some changes might cause a version split into a new file if the binary change is too big (e.g. increasing the brightness values of an image).
  • SVN: Supported; versioning is always related to a single file, so there are no "unwanted version splits".

Performance
  • Git: Fast, because most operations run locally against the local repository.
  • SVN: Slow, because most operations are performed against a remote server.

Space Requirements
  • Git: The repositories themselves are smaller, but the more (full) clones exist, the more overall (distributed) storage capacity is required.
  • SVN: The central repository might become huge, as all version histories are kept. Client copies might be smaller, as only partial fragments may be checked out.

Integration
  • Git: As it was initially designed for the Linux development community, the primary interface is still the command line. Integration with tools or with Windows (Explorer) is less advanced, still in development, or even nonexistent.
  • SVN: Integration with tools and operating systems (Windows, Mac, Linux) is good, thanks to a variety of proven tools such as Tortoise, Subclipse, Subversive, etc.

Versioning
  • Git: Uses SHA-1 hashes as version identifiers, which are less readable and not predictable.
  • SVN: Uses sequential, increasing revision numbers, which are short and predictable.

Learning Curve
  • Git: High, as it requires a paradigm change and introduces new terms and concepts. Repository structures might grow complex, and tool support is weaker. Big and growing community.
  • SVN: Low. Proven and well documented; architecture, repository structures and workflows are easier to understand. Good tool support. Big community.

Typical Users
  • Git: Tech-savvy, open-minded, open-source-oriented explorers. Typical early adopters.
  • SVN: Less tech-savvy, conservative workers. Typical late followers.

In conclusion, the preferred tool depends on the scenario and the requirements it has to serve.

If you have situations where check-in/checkout must work without an internet connection, or you have distributed developers or autonomous, self-organized teams (like in Scrum), and the rest of the infrastructure (build, deployment) supports that mode of operation - Git will be the tool, as its advantages outweigh its disadvantages.

If you're not a typical early adopter (that is, not open-minded enough to embrace new paradigms), have a centralized infrastructure (build, deploy, backup), typically work at or close to the same place with always-connected network availability, and work most of the time on Windows - stick with SVN, as the advantages of Git have no effect and its disadvantages outweigh them.

I myself have never worked with Git before. The companies I have worked for typically used a centrally organized repository, which was not always Subversion (ever had the joy of working with Integrity MKS?). For my personal open source projects, I prefer SVN, simply out of habit.

What do you think of it? Let me know.

Friday, December 13, 2013

Drawing UML Diagrams

Recently we introduced a CASE tool in our company so that we can model before we code. From my own experience I have the perception that there are far more bad diagrams out in the world than diagrams that really help in understanding the problem or the solution. This made me think about best practices for drawing UML diagrams that I could share with my colleagues - and that I would also like to share with the rest of the world.

Without reading what's already available on the net (e.g. the UML Best Practices: 5 rules for better UML diagrams), I came up with the following list of principles you should consider when drawing diagrams:
  • Be Creative. Designing is a creative task; there is no right or wrong. Three designers might produce four different solutions to the same problem, all of which work well. So you might also consider different ways of expressing certain aspects. Take your time to step back and ask yourself: will my intended audience understand what I want to express? And if not, why not?
  • Focus. People tend to overload a diagram. Concentrate on expressing only one aspect. Design a racing car or a truck, but not both (and don't even think about a boat). Isolated aspects should be expressed in separate diagrams; the connection between two aspects can be captured in an overview diagram. It's similar to digital photos: more diagrams do not cost more money - but diagrams that need more time to understand certainly do.
  • Keep it Simple. From the psychology of perception, there is a limit to how many elements one can easily recognize without having to think too much. This limit is different for each individual, but as a rule of thumb it lies around 10 elements. So if your intention is to make your audience understand the problem as quickly as possible (e.g. in a presentation), reduce the number of elements to below 10. If your problem cannot be distributed across multiple diagrams, or the audience has the freedom to take its time to understand - both of which are rarely the case - you can have more than 10. But always be aware of the consequences: the time to understand will increase considerably. The number of elements in a diagram is not a measure of the complexity of the problem, but of the inability to understand the real problem.
  • De-Clutter. Remove everything that is not relevant. While 'Focus' and 'Keep it Simple' aim at reducing the number of elements (e.g. classes), de-cluttering aims at reducing the amount of additional information, such as labels, cardinalities and visible associations. In model-centric modeling tools you create a data structure, with diagrams showing that structure from different perspectives; removing elements and additional information from a diagram therefore does not remove them from the structure. The label of an association specifies a certain role, and this role might be totally irrelevant for the diagram you're drawing. The same goes for cardinalities or additional associations, like package membership. So always check: does the information help to quickly understand the concept you're depicting? If yes, keep it; if not, remove it.
  • Order. Avoid chaos. A viewer should be naturally guided through the diagram; there should be a natural reading order that supports its understanding. Arrange the elements according to their meaning (front/back, client/server, consumer/producer), starting top-left, in one of the following directions (or combinations of them): left-to-right, top-to-bottom, outside-in or inside-out (starting at the center). Avoid crossing lines. In cases where crossings are not avoidable, use jump links. Sometimes it helps to simply rearrange elements to avoid crossings. Use diagonal lines only where they suit. I prefer diagonal lines in use case diagrams, while using orthogonal lines in more technical diagrams (class, deployment, ...).
  • Compress. Reduce the white space as much as possible. Arrange the elements closer to each other and reduce their size as much as possible. Avoid diagrams that are so big that you have to scroll or zoom out to see everything.
In summary, a checklist of criteria for good diagrams could be:
  • looks "nice"
  • arrangement in logical order
  • element count < 10
  • diagram-size < display-size (without zooming)
  • more orthogonal lines, less diagonal lines
  • no crossings 
And one more piece of advice: when drawing UML diagrams, be sure about the semantics of the (graphical) notations. Don't just use certain shapes because they look nice; to someone familiar with UML, they will easily reveal whether you understand UML or not.

    No EECH over EOS

    Sounds cryptic, but if you remember, I tried to connect the EECH telemetry output via the siconbus with the Helios "Virtual Cockpit" using the EOS bus protocol. The bad news is: it won't work that way - for two reasons:
    1. The binary indicator information of EECH (various indicator lights) does not seem to be exported. This appears to be a bug from a previous release that was either never fixed or has reoccurred. Anyway, there is currently no way to export the indicator light status, and therefore no chance to map it to any Helios indicator light. The good news, however, is that the bridge from EECH to Helios over EOS works in principle, although that's not much use with the current bug.
    2. There is no way to map the telemetry information (altitude, speed, etc.) to any of the analog inputs. The reason is that the analog EOS inputs have a discrete value range of 0 to 1024, while Helios only accepts continuous float values for certain gauges (e.g. the altimeter). The CommServer of EECH does export float values, but the EOS inputs cannot carry them. So there is only one conclusion: EOS is not the right protocol to transport the telemetry data.
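    To put some (made-up) numbers on that mismatch: squeezing a continuous float value through a discrete 0-1024 input inevitably quantizes it. A little Java toy calculation (the altitude range is just an assumption for illustration):

    // Toy illustration of the quantization problem (not EECH or Helios code):
    // mapping a float altitude of 0..6000 m onto the discrete EOS range 0..1024
    // yields steps of roughly 5.9 m - too coarse for a continuous altimeter needle.
    public class EosQuantizationDemo {
      public static void main(String[] args) {
        final float maxAltitude = 6000f; // assumed gauge range in meters
        final int eosMax = 1024;         // discrete EOS analog input range

        float altitude = 1234.5f;        // float value as exported by the CommServer
        int eosValue = Math.round(altitude / maxAltitude * eosMax);
        float reconstructed = (float) eosValue / eosMax * maxAltitude;

        // prints: altitude 1234.5 -> EOS 211 -> reconstructed 1236.3
        System.out.printf("altitude %.1f -> EOS %d -> reconstructed %.1f%n",
            altitude, eosValue, reconstructed);
      }
    }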
    These two results lead to the following next steps.
    First, I have to address the indicator export issue to the EECH developer community, so that this information will be exported again via the CommServer.
    Second, I have to find a different way of transporting the (float) telemetry data to Helios, and I already have an idea for that: Helios was initially created to receive telemetry data from the Lua export of DCS. The exported format is a bit different from that of EECH (not by much) and customizable (great!). Further, Helios provides a Lua export script for DCS out of the box, so I could use that. The major difference between the DCS Lua export and the EECH CommServer is that DCS pushes the data while EECH responds to requests. As Helios was intentionally designed to communicate with DCS, it expects the data to be pushed to it. And this is where the siconbus comes in. The idea is to connect a "pull" connector, which polls EECH for telemetry data, with a "push" connector, which pushes the data to Helios. The polling interval is controlled by the pull connector, and the data is transformed on the bus.
    Now I just need to find the time to implement this idea...
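    Just to sketch the idea (none of this exists yet - the connector interfaces below are made-up placeholders, not actual siconbus API):

    import java.util.Map;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Hypothetical sketch of the pull/push bridge between EECH and Helios.
    public class PullPushBridgeSketch {

      /** Pull side: polls a source (e.g. the EECH CommServer) for telemetry values. */
      interface PullConnector {
        Map<String, Float> poll();
      }

      /** Push side: pushes telemetry to a sink (e.g. Helios expecting DCS-style data). */
      interface PushConnector {
        void push(Map<String, Float> telemetry);
      }

      /** Wires both sides together; the pull side dictates the polling interval. */
      static ScheduledExecutorService bridge(
          PullConnector source, PushConnector sink, long intervalMillis) {
        ScheduledExecutorService poller = Executors.newSingleThreadScheduledExecutor();
        poller.scheduleAtFixedRate(
            () -> sink.push(source.poll()), // the data could be transformed here
            0, intervalMillis, TimeUnit.MILLISECONDS);
        return poller;
      }
    }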

    Thursday, December 12, 2013

    Device Link

    In the last weeks I've been a bit away from my simulator project, as I was busy with education and certification activities. Anyway, I've finished the implementation of the device link support for the simulation interconnect bus (see my last blog post). The device link protocol is rather simple, as it consists only of request and response messages with a set of key-value pairs. If an entry is just a key, it marks a request for a value - usually sent in a request message - and if it contains a value, it denotes either a set request or a response to a value request. The protocol runs over UDP, and I tested it with the CommServer of Enemy Engaged.
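    To illustrate the principle (the actual wire format may look different - I'm assuming a simple textual key/key=value encoding here purely for demonstration), parsing such a message boils down to this:

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Sketch of the key-value idea behind the protocol; the encoding
    // (';' as separator, '=' for values) is an assumption for illustration.
    public class KeyValueMessageSketch {

      /** Parses entries like "altitude" (a value request) or "altitude=1234.5"
          (a set request or a response) into a key-to-optional-value map. */
      public static Map<String, String> parse(String message) {
        Map<String, String> result = new LinkedHashMap<>();
        for (String entry : message.split(";")) {
          int eq = entry.indexOf('=');
          if (eq < 0) {
            result.put(entry, null); // key only: a request for a value
          } else {
            result.put(entry.substring(0, eq), entry.substring(eq + 1)); // set or response
          }
        }
        return result;
      }
    }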
    With the Device Link API that I've implemented, it should be relatively easy to create a DeviceLink client (or even a server) that is capable of sending or responding to requests. The API basically consists of two main interfaces:
    • DeviceLinkPacket - which can be a request or a response and may carry a list of parameters
    • DeviceLinkParameter - which carries the numerical id of the parameter, marks it as either a getter or a setter, and may carry a value of the parameter's type
    and a couple of supplemental interfaces (DeviceLinkParameterDefinition, DeviceLinkParameterSet and DeviceLinkPacketListener), as well as some default implementations for these interfaces.
    A central utility class (DeviceLink) provides methods for creating request and response packets of the API types, as well as for parsing a string into a DeviceLinkParameterSet. The DeviceLinkClient can connect to a DeviceLinkServer, such as the Enemy Engaged CommServer. With the listener interface, functionality can be added to the client to react to incoming packets.
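    Roughly sketched, the two main interfaces look like this (the signatures are paraphrased from the description above; the actual sources may differ):

    import java.util.List;

    // Paraphrased sketch of the two main API interfaces, not the verbatim sources.
    interface DeviceLinkPacket {
      boolean isRequest();                       // request or response packet?
      List<DeviceLinkParameter> getParameters(); // the carried parameters, if any
    }

    interface DeviceLinkParameter {
      int getId();        // numerical id of the parameter
      boolean isSetter(); // marks the parameter as a getter (key only) or a setter
      Object getValue();  // the parameter's value, of the parameter's type, if present
    }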

    For Enemy Engaged I defined an implementation of the DeviceLinkParameterSet (HelicopterType) and the corresponding DeviceLinkParameterDefinitions (EECHParameter) carrying the various parameters of the helicopters. Connecting to Enemy Engaged is rather simple, and I implemented an example client (for my own testing purposes).


    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    try (DeviceLinkClient client =
        new DeviceLinkClient(HelicopterType.Any)) {

      //eechhost is mapped to localhost in the hosts file, port is 10000
      client.connect("eechhost", 10000);

      //output all received packets to the console
      client.addPacketListener(new DeviceLinkPacketListener(){

        public void onReceive(DeviceLinkPacket incoming) {
          for(DeviceLinkParameter p : incoming.getParameters()){
            System.out.println(p);
          }
        }

      });

      //poll the server periodically for values
      while(!Thread.currentThread().isInterrupted()){

        //request all common telemetry parameters
        final List<DeviceLinkParameter> params = new ArrayList<>();
        for(EECHParameter.Common c : EECHParameter.Common.values()){
          params.add(DeviceLink.createParameter(c));
        }
        final DeviceLinkPacket packet = DeviceLink.createRequest(params);

        System.out.printf("\t --> (%s)\n", packet);
        client.send(packet);

        //polling interval, e.g. once per second
        Thread.sleep(1000);
      }

    } catch (IOException | InterruptedException e) {
      e.printStackTrace();
    }

    To test and play with the API, you may download the source code from the project page (Simulation Interconnect Bus) - it's not packaged yet.
    I'd be happy to know what you think of it.

    Friday, November 1, 2013

    New Software Project

    Three years of silence and now two posts within an hour - well, that's an improvement...
    Last night I opened a new SourceForge project (Simulation Interconnect Bus) in order to have a repository for my code, as it seems that what I started a couple of days ago might take a bit more time, and this way I can code while travelling as well. But what is it that I've started? Let's go back a bit in time.

    A long time ago I bought the Enemy Engaged: Apache vs Havoc and Enemy Engaged: Comanche vs Hokum (EECH) helicopter simulators and spent quite some time with them. Over the years, although the simulator is a bit outdated from a technical perspective, a fine community has extended it with various options, including exporting the MFDs to a second monitor and telemetry data over the DeviceLink protocol of IL-2 Sturmovik.

    Jumping to the year 2013: I was about to trash my 8-year-old notebook when I had the idea to take it apart and see which components I could still use. I extracted the panel and bought a controller board on eBay to make a standalone monitor out of it.
    Old laptop TFT panel with controller board as external display
    Over some evenings in the basement, I built my first, simple homemade cockpit from the panel and the Thrustmaster Cougar MFD frames. With that, I was able to use the old laptop panel as a second monitor to display the EECH MFDs.


    A couple of weeks later I bought the Digital Combat Simulator series (DCS), and while learning how to fly I also tested various community-built extensions. One of these extensions was Helios, a tool to build your own virtual cockpits with gauges, panel lights and buttons, which is able to connect to DCS so that you have all your telemetry on a second or third screen. Of course my simple homemade cockpit was the primary target for the exported gauges.

    A couple of days ago I tested Helios in combination with EECH. Although EECH is not natively supported, I was able to "label" the programmed MFD buttons as overlays on the screen (making it easier to remember what the buttons actually do) and mapped some actions of the programmable joystick to info panel lights in Helios. The result was a much more comprehensive overview of the current flight situation and improved controls.


    But I also wanted to have the telemetry data on the second screen. So I investigated a bit more and came across a supported interface in Helios called the "EOS Bus". The EOS Bus and its protocol are intended to connect external electronics boards like an Arduino to Helios via a serial bus, in order to map external switches, controls and indicator LEDs to events in Helios, respectively the simulation. So the idea came up to build a bridge between the UDP-based DeviceLink protocol used by EECH (the EECH CommServer) and the serial-bus-based communication of EOS and Helios - and that was the hour of birth of this project.

    Long time no write

    It's been a while since I posted something to this blog. A lot of things have happened since then. To make it short: I quit my job at IBM due to a total absence of projects, got married, became a father, started working at a Swiss bank, became a father again, left the bank, and now I am working for a small company doing consulting in the Enterprise Content Management area. So if any of my readers (if there are any?) expect more postings regarding Lotus Connections or any other Lotus software (well, hope dies last)... I recommend looking somewhere else.

    I intend to revive this blog and write more about what I do in my private life and less about what I do at work. Nevertheless, I hope it might still be interesting to someone, although the target group might change a bit. My recent hobby activities have been around flight simulation and home cockpit building, although I focus less on the physical cockpit part and more on the virtual parts running on the PC.