wissel.net

Usability - Productivity - Business - The web - Singapore & Twins

Metawork: nobody masters it, yet everyone grudgingly plays along


This article is a translation/paraphrase of Professor Gunter Dueck's original post titled DD265: Metawork – keiner kann’s, aber alle machen ärgerlich mit (Mai 2016). Professor Dueck's philosophy resonates with me, so I'd like to make his thoughts available to a wider audience. Bear with my Gerlish. Remarks in brackets aren't part of the original text and are either my comment, extension or explanation. Here we go:

Metawork is your own effort to organize work (yours and others'), as opposed to performing the actual work. It is about coordinating your contributions, more often than not across multiple projects. This includes managing decisions (through eMail) and communicating with all stakeholders. E.g. you can use efficient (Dueck used the word "fertile", but I'm not sure that has the same resonance in English) meetings to establish how to structure and execute working together. Over time a corporate culture emerges where good common metawork becomes the enabler for efficient execution of the core work (we'll learn another term for this just below).

In reality, however, meetings are full of quarrels about who does what. Conflicts surface, everyone speaks their mind unfiltered, meetings drag on and on. People bear grudges, get annoyed and are left with the feeling of having wasted valuable time they won't get back. Dueck checked what the web has to say about metawork. His favorite place is the Urban Dictionary, where ordinary people contribute definitions for difficult terms and provide lots of suggestions. The best of them are the odd ones.

People rant online about being overburdened with unproductive responsibilities, unable to get anything done. Someone shares that in a development project staffed with eight people, only two actually code. The rest warm seats in the meeting room and are first in line for promotion if the project succeeds. What a mess!

Hmmm. So your own work is productive, anything else is a distraction. No thought is given to what the other project members see as their "productive work". An example: if the developers miss a deadline, it generates a lot of disruption for the rest of the team. "Everything would be perfect if the coders worked properly! We have to integrate into SAP, everybody is waiting. What a cluster f**k (that's the closest cultural equivalent to "Supergau" I could think of)" - The two developers retort: "Had you contributed code instead of babbling in all those meetings, we would be done by now"

This is a clear indicator, which Dueck sees in all corporations, that the different project members have no understanding of the tasks of their fellow members. If they do know them, they doubt their importance or usefulness. One's own work is important, anything else is a distraction. Others only interrupt. Then they quarrel in meetings.

Why oh why? All are well trained for their own tasks and complete them quite well. However hardly anyone has been educated in metawork: how to get organized and how to collaborate. They do some of it every day, limping along without having or wanting to learn about it. It has never been a topic. They complain the whole day about the drag of metawork without being able to fully grasp it, lacking a word for it, not aware of the term metawork. Managers and project leaders follow the prevalent methodologies and press forward. More often than not, they aren't aware of metawork either. They treat managing or leading as "their own work", but hardly spend a thought on the work as a whole.

Even if managers knew how to coordinate well and fuse the parts into a whole, how to deal with unknowns and avoid conflict - it would fall short as long as their reports have no clue what metawork is.

When team members spend only half of their time on "their own work (e.g. programming)" and are irate about the "stolen" time spent in meetings, they haven't understood the very nature of work - or the metawork is done mind-bogglingly badly.

Metawork is about the principles and foundations of performing work. Those who haven't given it a thought bungle through each project, wondering how it could ever work. Every conflict is new, different and unique. Each project has its own singular surprises. What a madhouse! Lots of literature reinforces that point of view.
However, that's because everyone focuses only on their own tasks at hand and never learns to respect the significant contributions of others.

Dueck suggested in his book „Verständigung im Turm zu Babel“ (Communication in the Tower of Babel) and on his blog to contrast meta communication with mesa communication. „Mesa“ is Greek for „inside“, „meta“ is more like „and beyond“. In the context of work, „mesawork“ would be the individual task at hand and „metawork“ anything beyond that. Dueck sees it over and over again: nobody is really good at meta communication, everybody just gets things off their chest. Similarly, we are good at mesawork but bemoan the complexity of the world, since we can't relate to metawork.

Shall we leave it that way? Half of our time experts, the other half clueless n00bs? Isn't the balance tipping towards cluelessness, since the need for metawork is rising in an increasingly complex world? How about you? Happy to keep on just fretting?

Posted by on 2016-05-13 07:41 | Comments (0) | categories: After hours

Mach Dich auf die Socken!


A common requirement in corporate systems is "let me know when something is going on". In Notes we use agents triggered "On document creation or update" to process such events. To let external systems know about such a change, R8 introduced the web service client. This works well in distributed systems, but requires quite some work on both ends. In a recent case I had to optimize the communication between Domino and a task running on the same machine. The existing solution was polling the Domino API in short intervals for updates - something I would call donkey mode. Sockets to the rescue. A few lines of Java in a triggered agent put an end to donkey mode and provide the receiving end with all it needs in time:
import java.io.BufferedReader;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.Socket;

import lotus.domino.AgentBase;
import lotus.domino.AgentContext;
import lotus.domino.Database;
import lotus.domino.Document;
import lotus.domino.DocumentCollection;
import lotus.domino.NotesException;
import lotus.domino.Session;

import com.issc.castle.domino.Utils;

public class JavaAgent extends AgentBase {

	public static String	sockethost	= "127.0.0.1";
	public static int		socketport	= 1234;

	public void NotesMain() {
		Session session = null;
		AgentContext agentContext = null;
		Database db = null;
		DocumentCollection dc = null;
		Document doc = null;

		// The socket elements
		DataOutputStream out = null;
		BufferedReader in = null;
		Socket socketClient = null;
		try {
			// Get the Notes parts
			session = getSession();
			agentContext = session.getAgentContext();
			db = agentContext.getCurrentDatabase();
			dc = agentContext.getUnprocessedDocuments();

			// Get the socket
			socketClient = new Socket(sockethost, socketport);
			in = new BufferedReader(new InputStreamReader(socketClient.getInputStream()));
			out = new DataOutputStream(socketClient.getOutputStream());

			doc = dc.getFirstDocument();
			while (doc != null) {
				Document nextDoc = dc.getNextDocument(doc);
				this.signalOneDocument(doc, in, out);
				Utils.shred(doc);
				doc = nextDoc;
			}

			// Mark them done
			dc.updateAll();
		} catch (Exception e) {
			e.printStackTrace();
		} finally {
			Utils.shred(doc, dc, db, agentContext, session);
			// Close them
			try {
				if (out != null) {
					out.close();
				}
				if (in != null) {
					in.close();
				}
				if (socketClient != null) {
					socketClient.close();
				}
			} catch (IOException e) {
				e.printStackTrace();
			}
		}
	}

	private void signalOneDocument(final Document doc, final BufferedReader in, final DataOutputStream out) {
		try {
			String notesURL = doc.getNotesURL();
			out.writeBytes(notesURL);
			out.writeBytes("|");
		} catch (NotesException e) {
			e.printStackTrace();
		} catch (IOException e) {
			e.printStackTrace();
		}

	}
}

No libraries to load; the only utility function used is Utils.shred(), which is an error-wrapped recycle call.
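
Utils.shred() itself isn't shown here; a minimal sketch of such an error-wrapped recycle helper could look like this:

import lotus.domino.Base;

public class Utils {
	// Recycle any number of Domino objects, ignoring individual failures
	public static void shred(final Base... notesObjects) {
		for (Base notesObject : notesObjects) {
			if (notesObject != null) {
				try {
					notesObject.recycle();
				} catch (Exception e) {
					// nothing sensible to do when recycle fails
				}
			}
		}
	}
}

The receiving end isn't part of this post either. Assuming it just consumes the pipe-delimited Notes URLs, a bare-bones listener (class name and output are illustrative) could be as simple as:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;

public class UpdateListener {
	public static void main(String[] args) throws Exception {
		try (ServerSocket server = new ServerSocket(1234)) {
			while (true) {
				try (Socket client = server.accept();
						BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()))) {
					// the agent sends one pipe-delimited string and closes the socket
					String line;
					while ((line = in.readLine()) != null) {
						for (String notesURL : line.split("\\|")) {
							System.out.println("Changed: " + notesURL);
						}
					}
				}
			}
		}
	}
}
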
As usual YMMV
(Bad German pun in the title)

Posted by on 2016-05-09 10:46 | Comments (0) | categories: IBM Notes Java

Annotations to supercharge your vert.x development


ProjectCastle is well under way. Part of it, the part talking to Domino, is written in Java 8 and vert.x. With some prior experience in node.js development, vert.x will look familiar: based on an event loop and callbacks, you develop in a very similar way. The big differences: vert.x runs on the JVM (version 8), it is - by nature of the JVM - multi-threaded, it features an event bus and it is polyglot - you can develop in a mix of languages: Java, JavaScript, Jython, Groovy etc.
This post reflects some of the approaches I found useful developing with vert.x in Java. There are 3 components which are core to vert.x development:
  • Verticle

    A unit of compute running with an event loop. Usually you start one Verticle (optionally with multiple instances) as your application, but you might want/need to start additional ones for longer running tasks. A special version is the worker verticle, which runs from a thread pool to allow execution of blocking operations
  • EventBus

    The different components of your application message each other via the EventBus. Data sent over the EventBus can be a String, a JsonObject or a Buffer. You can also send any arbitrary Java class as a message once you have defined a codec for it
  • Route

    Like in node.js, a vert.x web application can register routes and their handlers to react to web input under various conditions. Routes can be defined using URLs, HTTP verbs and Content-Types (for POST/PUT/PATCH operations)
Ideally, when defining a route and a handler, a verticle or a potential message for the EventBus, all necessary code stays contained in the respective source code file. The challenge here is to register the components when the application starts. Your main Verticle doesn't know what components are in your application, and manually maintaining loader code is a pain to keep in sync (besides leading to merge conflicts when working in a team).
Java annotations to the rescue! If you are new to annotations, go and check out this tutorial to get up to speed. For my project I defined three of them, with one of them applicable multiple times.

CastleRequest

A class annotated with CastleRequest registers its codec with the EventBus, so the class can be sent over the EventBus and gets encoded/decoded appropriately. A special value for the annotation is "self", which indicates that the class itself implements the MessageCodec interface
@Documented
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE})
public @interface CastleRequest {
  // We use value to ease the syntax
  // to @CastleRequest(NameOfCodec)
  // Special value: self = class implements the MessageCodec interface
  String value();
}
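
How the annotation gets processed at startup is not shown in this post; a minimal sketch of a registrar - assuming the codec name in value() is a fully qualified class name - could look like this:

import io.vertx.core.Vertx;
import io.vertx.core.eventbus.MessageCodec;

public class CodecRegistrar {

	// Register the codec named in the annotation value - or the annotated
	// class itself when the value is "self"
	@SuppressWarnings({ "rawtypes", "unchecked" })
	public static void register(final Vertx vertx, final Class<?> clazz) throws Exception {
		CastleRequest annotation = clazz.getAnnotation(CastleRequest.class);
		MessageCodec codec = "self".equals(annotation.value())
				? (MessageCodec) clazz.newInstance()
				: (MessageCodec) Class.forName(annotation.value()).newInstance();
		vertx.eventBus().registerDefaultCodec((Class) clazz, codec);
	}
}
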

CastleRoute

This annotation can be assigned multiple times, so two annotation interfaces are needed:
@Documented
@Repeatable(CastleRoutes.class)
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE})
public @interface CastleRoute {
  String route();
  String description();
  String mimetype() default "any";
  String method() default "any";
}

and the containing annotation for repeatability (new with Java 8):
@Documented
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE})
public @interface CastleRoutes {
  CastleRoute[] value();
}
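
A sketch of how these annotations could drive route registration in vert.x-web - assuming the annotated handler classes implement Handler<RoutingContext>, which is my convention, not part of the annotation itself:

import io.vertx.core.Handler;
import io.vertx.core.http.HttpMethod;
import io.vertx.ext.web.Route;
import io.vertx.ext.web.Router;
import io.vertx.ext.web.RoutingContext;

public class RouteRegistrar {

	// Register one route per @CastleRoute annotation found on the handler class
	public static void register(final Router router, final Class<? extends Handler<RoutingContext>> clazz)
			throws Exception {
		Handler<RoutingContext> handler = clazz.newInstance();
		for (CastleRoute castleRoute : clazz.getAnnotationsByType(CastleRoute.class)) {
			Route route = router.route(castleRoute.route());
			if (!"any".equals(castleRoute.method())) {
				route.method(HttpMethod.valueOf(castleRoute.method().toUpperCase()));
			}
			if (!"any".equals(castleRoute.mimetype())) {
				route.consumes(castleRoute.mimetype());
			}
			route.handler(handler);
		}
	}
}
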

CastleVerticle

Classes marked with this annotation are loaded as verticles. They can implement listeners for the whole spectrum of vert.x listening capabilities:
@Documented
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE})
public @interface CastleVerticle {
  String type() default "worker";
  int instances() default 0;
  boolean multithreaded() default false;
}
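
What remains is startup code that finds the annotated classes and registers them. A minimal sketch, assuming the Reflections library for classpath scanning (the real loader would process @CastleRequest and @CastleRoute the same way):

import java.util.Set;

import org.reflections.Reflections;

import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;

public class AnnotationLoader {

	// Deploy every @CastleVerticle class found in the given package
	public static void deployAnnotatedVerticles(final Vertx vertx, final String packageName) {
		Reflections reflections = new Reflections(packageName);
		Set<Class<?>> candidates = reflections.getTypesAnnotatedWith(CastleVerticle.class);
		for (Class<?> clazz : candidates) {
			CastleVerticle verticleInfo = clazz.getAnnotation(CastleVerticle.class);
			DeploymentOptions options = new DeploymentOptions()
					.setWorker("worker".equals(verticleInfo.type()))
					.setMultiThreaded(verticleInfo.multithreaded());
			if (verticleInfo.instances() > 0) {
				options.setInstances(verticleInfo.instances());
			}
			vertx.deployVerticle(clazz.getName(), options);
		}
	}
}
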

Read more

Posted by on 2016-04-02 08:01 | Comments (0) | categories: vert.x

Now we are token - Authorization using JSON Web Token in Domino


After having Vert.x and Domino co-exist, the door opens for a few interesting applications of the newfound capabilities. One sticky point in every application landscape is authentication and authorization. This installment is about authorization.
The typical flow:
  1. you access a web resource
  2. provide some identity mechanism (in the simplest case: username and password)
  3. in exchange get some proof of identity
  4. that allows you to access protected resources.
In Basic authentication you have to provide that proof every time, in the form of an encoded username/password header. Since that limits you to username and password, all other mechanisms provide you, in return for your valid credentials, with a "ticket" (technically a "Bearer Token") that opens access.
I tend to compare this with a movie theater: if you want to enter the room where the movie plays, you need a ticket. The guy checking it is only interested in one thing: is it valid for that show. He doesn't care whether you paid in cash, with a card, got it as a present or won it in a lucky draw. Whether you bought it just now, online or yesterday - he doesn't care. He only cares: is it valid. The same applies to our web servers.
In the IBM world the standard here is an LTPA token that gets delivered as a cookie. Now cookies (besides being fattening) come with their own little set of troubles and are kind of frowned upon in contemporary web application development.
The current web API token darling is the JSON Web Token (JWT). They are an interesting concept since they sign the data provided. Be clear: they don't encrypt it, so you need to be careful if you want to store sensitive information (encrypt that first).

Now how to put that into Domino?

The sequence matches the typical flow:
  1. User authenticates with credentials
  2. server creates a JWT
  3. stores JWT and credentials in a map, so when the user comes back with the token, the original credentials can be retrieved
  4. delivers JWT to caller
  5. Caller uses JWT for next calls in the header
It isn't rocket science to get that to work.
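
A minimal sketch of steps 2 to 4, assuming the jjwt library for the signing part (class and field names are illustrative, not the actual project code):

import java.util.Date;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.SignatureAlgorithm;

public class TokenIssuer {

	// Step 3: token id -> original credentials, so they can be retrieved later
	private final Map<String, String> credentialStore = new ConcurrentHashMap<String, String>();
	private final byte[] secret;

	public TokenIssuer(final byte[] secret) {
		this.secret = secret;
	}

	// Steps 2-4: create the JWT, remember the credentials, hand the token out
	public String issueToken(final String userName, final String credentials) {
		String tokenId = UUID.randomUUID().toString();
		this.credentialStore.put(tokenId, credentials);
		return Jwts.builder()
				.setId(tokenId)
				.setSubject(userName)
				.setIssuedAt(new Date())
				.signWith(SignatureAlgorithm.HS256, this.secret)
				.compact();
	}
}

A filter in front of the protected routes then verifies the signature with the same secret and looks up the stored credentials.
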

Read more

Posted by on 2016-02-25 01:37 | Comments (1) | categories: IBM Notes vert.x

The Cloud Awakening


It has been a decade since Amazon pioneered the cloud as a computing model. Buying ready-made applications (SaaS) enabled non-IT people to quickly acquire solutions that IT, starved of budget, skills and business focus, couldn't or didn't want to deliver. Products like Salesforce or Dropbox became household brands.
But the IT departments got a slice of the cloud cake too, in the form of IaaS. For most IT managers IaaS feels like the extension of their virtualization strategy, just running in a different data center. They still patch operating systems, deploy middleware and design never-to-fail platforms. They are in for an awakening.
Perched in the middle between SaaS and IaaS you find the cloud age's middleware: PaaS. PaaS is a mix that reaches from almost-virtual-machines like Docker, to compute platforms like IBM Bluemix runtimes, Amazon Elastic Beanstalk or Google Compute Engine, all the way to the new nano services like AWS Lambda, Google Cloud Functions or IBM OpenWhisk. Without closer inspection a middleware professional would breathe a sigh of relief: middleware is here to stay.
Not so fast! What changed?
There's an old joke claiming that the IBM WebSphere architecture allows you to build one cluster to run the whole planet on - and have it keep running after mankind is gone. So the guiding principles are: provide a platform for everything, never go down. We spend time and again (and budget) on this premise: middleware is always running. Not so in the brave new world of cloud. Instead of one rigid structure that runs and runs, a swarm of light compute units (like WebSphere Liberty) does one task each, and one task can run on a whole swarm of compute. Instead of robust and stable, these systems are resilient, summed up in the catch phrase: fail fast, recover faster.
In a classical middleware environment the failure of a component is considered catastrophic (even if mitigated by a cluster); in a cloud environment that's exactly what's expected. A little bit like a bespoke restaurant that stays closed when the chef is sick vs. a burger joint, where one of the patty flippers not showing up is barely noticeable.
This requires a rethink: middleware instances become standardized, smaller, replaceable and repeatable. Gone are the days where one could spend a week installing a portal (as I had the pleasure a decade ago). The rethink goes further: applications can't be a "do-it-all" in one big fat chunk. First, they can't run on these small instances; secondly, they take too long to boot; third, they are a nightmare to maintain and extend. The solution is DevOps and microservices. Your compute hits the memory or CPU limit? No problem, all PaaS platforms provide a scale-out. It's fun to watch in test how classically developed software fails in these scenarios: suddenly the Singleton that controls record access isn't so single anymore. It has evil twins on each instance.
You're aiming at 99.xxx availability? The classical approach is multi-way clusters (which in the end don't do much if the primary member never goes down). In the PaaS world: have enough instances around. Even if an individual instance has only 90% availability (a catastrophic result in classic middleware), a swarm of runtimes at a moderate member count gets you to your triple digits after the dot. You can't guarantee that Joe will flip the burgers all the time, but you know: someone will be working on any given day.
And that's the cloud awakening: the transition from solid to resilient*, from taking for granted to working with what is there - may the howling begin.

* For the record: how many monarchs who had SOLID castles are still in charge? In a complex world resilience is the key to survival.

Posted by on 2016-02-23 10:31 | Comments (1) | categories: Software

Designing a Web Frontend Development Workflow


On the web, 'you can do anything' extends to how you develop, too. With every possible path open, most developers, me included, lack direction - at least initially. To bring order to the mess I will document considerations and approaches for designing a development workflow that makes sense. It will be opinionated, with probably changing opinions along the way.
First I will outline the design goals, then the tools at hand, and finally propose a solution approach.

Design Goals

With the outcome in mind it becomes easier to discover the steps. The design goals don't have equal weight, and depending on your preferences you might add or remove some of them:
  1. Designed for the professional
    The flow needs to make life easier when you know what you are doing. It doesn't need to try to hide any complexity away. It shall take the dull parts away from the developer, so (s)he can focus on functionality and code
  2. Easy to get started
    Some scaffolding shall allow the developer to have an instant framework in place that can be modified, adjusted and extended. That might be a scaffolding tool, a clonable project or a zip file
  3. Convention over configuration
    A team member or a new maintainer needs to be able to switch between different projects and 'feel at home' in all of them. This requires (directory) structures, procedural steps and conventions to be universal. A simple example: how do you start your application locally? Will it be npm start or gulp serve or grunt serve or nodemon or ?
  4. Suitable for team development
    Both the product code and the build script need to be modular. When one developer is adding a route and the needed functionality for module A, it must not conflict with code another developer writes for module B. This rules out central routing files and manual addition of css and js files
  5. Structured by function, not code type
    A lot of the examples out there put templates in one directory, controllers in another and directives in yet another. A better way is to group them by module, so related files live together in a single location (strongly influenced by a style guide)
  6. Suitable for build automation
    Strongly influenced by Bluemix Build & Deploy, I grew fond of: check code into the respective branch (using git-flow) and magically the running version appears on the dev, uat or production site. When using a Jenkins based approach that means the build script needs to be self-contained (short of having to install node/npm) and can't rely on tools in the global path
  7. React to changes
    A no-brainer: when editing a file, be it in an editor or an IDE, the browser needs to reload the UI. Depending on the file type (e.g. less or typescript) a compile step needs to happen first. Bonus track: newly appearing files are handled too
  8. One touch extensibility
    When creating a new module or adding a new dependency there must not be a need for a "secondary" action like adding the JS or CSS definition to the index.html or manually editing a central route file to make it known
  9. Testable
    The build flow needs to have provisions to run unit tests, integration tests, code coverage reports, jshint, jslint, trace-analysis etc. "Oh, it does work" isn't enough. Code needs to pass tests and style conventions. The tests need to be able to run in the build automation too
  10. Extensible and maintainable
    Basing a workflow on one looooong build script easily turns into a maintenance task from hell. A collection of chainable tasks/modules/files can keep that in check
  11. Minimalistic (on the output)
    Keep the network out of the user experience. A good workflow minimizes both the number of calls and the size of the http transmissions. While in development all modules need to be nicely separated, in production I want as few css, html and js files as possible. Once the UI is loaded, any calls on the network are limited to application data, not application logic or layout
A very good starting point is John Papa's Yeoman generator HotTowel. It's not perfect: the layout overwrites the index.html on each new dependency/module, violating goal #4, and it depends on outdated gulp modules - goal #10 (I had some fun when I tried to swap gulp-minify-css, as recommended, for gulp-cssnano, and thereafter font-awesome wouldn't load since the minified css had a line comment in front of the font definition). I also don't specifically like/care (for my use case) to have the server component in the project. I usually keep them separate and just proxy my preview to my server project.

Tools at hand

There are quite some candidates: gulp, grunt, bower, bigrig, postman, yeoman, browserify, webpack, Testling and many others. Some advocate that npm scripts are sufficient.
In a nutshell: there are plenty of options.
One interesting finding: Sam Saccone researched the overhead es6 (a.k.a. es2015) has over current es5. TypeScript (using browserify) performed quite well. This makes it a clear candidate, especially when looking at AngularJS 2
Next up: Baby steps

Posted by on 2016-02-20 01:50 | Comments (0) | categories: angular.js Software

Vert.x and Domino


A while ago I shared how to use vert.x with a Notes client, which ultimately let me put an Angular face on my inbox and inspired the CrossWorlds project.
I revisited vert.x, which is now 3.2.1 and no longer beta. On a Domino Linux server (I don't have Windows) and on a Mac Notes client the JVM is 64 bit, which makes the configuration easier (no -w32 switch, no download of an additional JVM). The obligatory HelloWorld verticle ran quite nicely when I launched it manually. However it wouldn't run when Domino was started using a startup script.
The simple reason: to be able to access the Domino instance, the vert.x verticle needs to run with the same user as the Domino server. su-ing into the user doesn't do the trick - and of course you can't log into my server with the id that runs Domino. The solution was to turn to the expert and his outstanding Linux boot script. Using the /etc/sysconfig/rc_domino_config_* file you can simply define the behavior of your Domino startup and shutdown experience. Mine looks like this (I use "domino" as my standard user, not "notes"):

rc_domino_config_domino

LOTUS=/opt/ibm/domino
DOMINO_DATA_PATH=/home/domino/notesdata
DOMINO_SHUTDOWN_TIMEOUT=600
DOMINO_CONFIGURED="yes"
BROADCAST_SHUTDOWN_MESSAGE="yes"
DOMINO_REMOVE_TEMPFILES="yes"
DOMINO_POST_STARTUP_SCRIPT=/home/domino/scripts/launch_vertx
DOMINO_PRE_SHUTDOWN_SCRIPT=/home/domino/scripts/stop_vertx

I installed vert.x via npm using the full stack. With node.js installed, all you need is sudo npm install vertx3-full. Of course there are more conservative ways to install vert.x; that may be an exercise left to the reader. I didn't use any of the environment variables exposed by the standard boot script, to keep it independent. The script itself is just a few lines:

launch_vertx

#!/bin/sh
# Starts the vert.x tasks that talks to Domino
DOMINO_HOME=/opt/ibm/domino/notes/latest/linux
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
export VERTX_HOME=/usr/lib/node_modules/vertx3-full/vertx
export DYLD_LIBRARY_PATH=$DOMINO_HOME
export LD_LIBRARY_PATH=$DOMINO_HOME
export CLASSPATH=.:$DOMINO_HOME/jvm/lib/ext/Notes.jar:$CLASSPATH
export NOTES_ENV=SERVER
vertx start com.ibm.issc.verseu.VerseLauncher -cp /home/domino/scripts/verseu.jar --vertx-id domino

The shutdown script is short and sweet. Since I used a vertx-id, I can use that to shut down the verticle without knowing or caring about its start class name:

stop_vertx

#!/bin/sh
# Stops the vert.x tasks that talks to Domino
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
export VERTX_HOME=/usr/lib/node_modules/vertx3-full/vertx
vertx stop domino
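
For completeness, the HelloWorld verticle mentioned above is just a handful of lines - this is the generic vert.x 3 example (port number arbitrary), not project code:

import io.vertx.core.AbstractVerticle;

public class HelloVerticle extends AbstractVerticle {

	@Override
	public void start() {
		// answer every request with the obligatory greeting
		vertx.createHttpServer()
				.requestHandler(req -> req.response().end("Hello World"))
				.listen(8080);
	}
}
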

Next step: write some actual code beyond "Hello World".
As usual YMMV

Posted by on 2016-02-18 06:07 | Comments (1) | categories: IBM Notes vert.x

Developer or Coder? - Part 1


Based on a recent article I was asked: "So how would you train a developer to be a real developer, not just a coder?" Interesting question. Regardless of language or platform (maybe short of COBOL, where you visit retirement homes a lot), each training path has large commonalities.
Below I outline a training path for a web developer. I'm quite opinionated about tools and frameworks to use, but wide open about tools to know. The list doesn't represent a recommended sequence; that would be the subject of an entirely different discussion:

Read more

Posted by on 2016-02-13 12:42 | Comments (0) | categories: Software

Dissecting a mail UI


At Connect 2016 Jeff announced that there will be an IBM Verse client for Domino on premises. Domino customers are used to a high degree of flexibility, so the temptation will arise to customize the Verse experience.
However this ability hasn't been announced in any IBM roadmap, so any considerations are purely theoretical. What is the interface of Verse made of? Without reverse engineering, just by looking at it, one could come to the following conclusion:
A possible structure of the VerseUI
This looks quite manageable and lean. Now if all these components are Widgets (if built in Dojo), Directives (if built in AngularJS) or Components (an upcoming HTML standard), it wouldn't be too hard to envision customization abilities.
Using the excellent Tilt extension for Firefox one can visualize how the different web clients are structured.

Read more

Posted by on 2016-02-13 01:24 | Comments (0) | categories: IBM Notes

The quick and dirty Domino Cloudant export


Moving data out of Domino has never been hard, with all the APIs available. The challenge has always been: move them where? Ignoring for a second all security considerations, the challenge is to find a target structure that matches the Domino model. Neither flat table storage nor an RDBMS fits that very well.
A close contender is MongoDB, which is used in one compelling Notes retirement offering. However the closest match in concept and structure is Apache CouchDB - not surprising given its heritage and origin.
It is maintained by a team led by the highly skilled Jan Lehnardt, and of course there are differences to Notes.
But the fit is good enough. Using the lightweight Java library Ektorp, exporting a set of documents from Notes to CouchDB is a breeze. The core class is a simple mapping of a Notes document to a JSON structure:
package com.notessensei.export;

import java.util.HashMap;
import java.util.Map;
import java.util.Vector;

import lotus.domino.Document;
import lotus.domino.Item;
import lotus.domino.NotesException;

public class NotesJsonDoc {
	public static String ID_FIELD = "_id";
	public static String REV_FIELD = "_rev";
	
	private Map<String, String> content = new HashMap<String, String>();
	
	@SuppressWarnings("rawtypes")
	public NotesJsonDoc(Document source) throws NotesException {
		Vector allItems = source.getItems();
		for (Object itemObject : allItems) {
			Item item = (Item) itemObject;
			this.content.put(item.getName(), item.getText());
		}
		this.content.put(ID_FIELD, source.getUniversalID());
	}
	
	public Map<String, String> getContent() {
		return this.content;
	}
	
	public NotesJsonDoc setRevision(String revision) {
		this.content.put(REV_FIELD, revision);
		return this;
	}
	
	public String getId() {
		return this.content.get(ID_FIELD);
	}
}
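
The actual export is then just a matter of connecting and looping. A minimal sketch using Ektorp's standard connector (URL, database name and class name are illustrative):

package com.notessensei.export;

import org.ektorp.CouchDbConnector;
import org.ektorp.CouchDbInstance;
import org.ektorp.http.HttpClient;
import org.ektorp.http.StdHttpClient;
import org.ektorp.impl.StdCouchDbConnector;
import org.ektorp.impl.StdCouchDbInstance;

import lotus.domino.Document;
import lotus.domino.DocumentCollection;

public class CouchExport {

	// Push all documents in the collection to a CouchDB / Cloudant database
	public static void export(final DocumentCollection dc, final String url, final String dbName)
			throws Exception {
		HttpClient httpClient = new StdHttpClient.Builder().url(url).build();
		CouchDbInstance dbInstance = new StdCouchDbInstance(httpClient);
		CouchDbConnector db = new StdCouchDbConnector(dbName, dbInstance);
		db.createDatabaseIfNotExists();

		Document doc = dc.getFirstDocument();
		while (doc != null) {
			Document nextDoc = dc.getNextDocument(doc);
			NotesJsonDoc jsonDoc = new NotesJsonDoc(doc);
			// the UNID doubles as the CouchDB _id; a re-run would need to
			// fetch the current revision and call setRevision() first
			db.create(jsonDoc.getId(), jsonDoc.getContent());
			doc.recycle();
			doc = nextDoc;
		}
	}
}
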


Read more

Posted by on 2016-01-21 01:45 | Comments (1) | categories: Bluemix