wissel.net

Usability - Productivity - Business - The web - Singapore & Twins

GIT deploy your static sites - Part 1


When you, in principle, like the idea of serving SPAs from the http server, you will encounter the pressing question (no, not where babies come from): how do you get your application deployed onto the http server? (This applies to node.js applications too, but that is part of another story.)
On Bluemix that's easy: just use a Pipeline
For mere mortal environments there are several options:
  • Just FTP the files - insecure unless you use sftp/scp. The big pain here: deleting obsolete files
  • Set up rsync. Driven by an ssh key, it can be reasonably well automated. The same pain applies: deleting obsolete files
  • Use a GIT based deployment. This is what I will discuss further
I like a repository-based deployment since it fits nicely into a development workflow. The various git GUI tools provide insight into what has changed between releases, and if things go wrong, you can roll back to a previous version or wipe the data and reestablish it from the repository. Designing the flow, I considered the following constraints:
  • The repositories would sit on the web server
  • Typically a repository sits in .git inside the site directory. While you could protect that with access control, I decided against it: the repositories live in separate directories
  • When pushing to the master branch the site should get updated - and only then, not on any other branch. You can extend my approach to push other branches to other sites, giving you a test/demo/staging capability
  • Setting up a new site should be fast and reliable (including https - but that's part 2)
The "secret" ingredients here are git-hooks, in specific the post-receive. Hooks, in a nutshell are shell scripts that are triggered by events that happen to a git environment. I got inspired by this entry but wanted to automate the setup.

Read more

Posted by on 2017-01-12 12:26 | Comments (0) | categories: nginx WebDevelopment

Serving Single Page Applications with Domino


Single Page Applications (SPA) are all the rage. They get developed with AngularJS, ReactJS or {insert-your-framework-of-choice}. They share a few commonalities:
  • the application is served by a static web server
  • data is provided via an API, typically reading/writing JSON via REST or GraphQL
  • authentication is often long-lived ("remember me..."), based on JWT
  • authentication is highly flexible: login with {facebook|google|linkedin|twitter} or a corporate account. Increasingly two-factor authentication is used (especially in Europe)
How does Domino fit into this picture with its integrated http stack, authentication and database? The answer isn't very straightforward. Bundling components creates ease of administration, but carries the risk that new technologies are implemented late (or not at all). For anything internet-facing that's quite some risk. So here is what I would do:
Red/Green Zone layout for Domino

Read more

Posted by on 2017-01-11 04:14 | Comments (2) | categories: IBM Notes XPages

Lessons from Project OrangeBox


Project OrangeBox, the Solr-free search component, was launched after the experiments with Java8, Vert.x and RxJava in TPTSNBN had concluded. With a delivery promised, we were working against a tight deadline and burned way more midnight oil than I would have wished for.

Anyway, I had the opportunity to work with great engineers and we shipped as promised. There are quite some lessons to be learned; here we go (in no specific order):

  • Co-locate
    The Verse team is spread over the globe: USA, Ireland, Belarus, China, Singapore and the Philippines. While this allows for 24x7 development, it also imposes a substantial communications overhead. We made the largest jumps in both features and quality during and after co-location periods. So any sizable project needs to start with, and be interspersed with, co-location time. Team velocity will greatly benefit
  • No holy cows
    For VoP we slaughtered the "Verse is Solr" cow. That saved the Domino installed base a lot of investment in time and resources. Each project has its "holy cows": interfaces, tool sets, "invaluable, immutable code", development patterns, processes. You have to be ready to challenge them by keeping a razor-sharp focus on customer success. Watch out for prima donnas (see next item)
  • No Prima Donnas
    As software engineers we are very prone to perceiving our view of the world as the (only) correct one. After all, we create some of it. In a team setting that's deadly. Self-reflection and empathy are as critical to success as technical skills and perseverance.
    Robert Sutton, one of my favourite Stanford authors, expresses that a little more boldly.
    In short: a team can only be greater than the sum of its members when the individuals see themselves as members and are not hovering above it
  • Unit tests are overrated
    I hear howling - read on. Like "A journey of a thousand miles begins with a single step" you can say: "Great software starts with a unit test". Starts - not: "Great software consists of unit tests". A great journey that consists of nothing but steps ends tragically in death by starvation, thirst or evil events.
    The same applies to your test regime: you start with unit tests, write code, pass it on to the next level of tests (module, integration, UI) etc. So unit tests are a "conditio sine qua non" of your test regime, but in no way sufficient
  • Test pyramid and good test data
    Starting with unit tests (we used JUnit and EasyMock), you move up to module tests. There, still written in JUnit, you check the correctness of larger combinations. Then you have API tests for your REST API. Here we used Postman and its node.js integration Newman.
    Finally you need to test end-to-end, including the UI. For that, Selenium rules supreme. Why not e.g. PhantomJS? Selenium drives real browsers, so you can test (automated) against all rendering engines, which, as a matter of fact, behave unsurprisingly differently.
    One super critical insight: you need a good set of diverse test data, both expected and unexpected inputs, in conjunction with the expected outputs. A good set of fringe data makes sure you catch challenges and edge conditions early.
    Last but not least: have performance tests from the very beginning. We used both Rational Performance Tester (RPT) and Apache JMeter. RPT gives you a head start in creating tests, while JMeter's XML-file-based test cases were easier to share and manipulate. When you are short of test infrastructure (quite often the client running the tests is the limiting factor) you can offload JMeter tests to Blazemeter or Flood.io
  • Measure, measure, measure
    You need to know where your code spends its time. We employed a number of tools to get good metrics. You want to look at averages, min, max and standard deviations of your calls. David even wrote a specific plugin to see the native calls (note open, design note open) our Java code would produce (this will result in future Java API improvements). The two main tools (besides watching the network tab in the browser) were New Relic, with deep instrumentation into our Domino server's JVM, and JAMon, collecting live statistics (which you can query on the Domino console using show stats vop). Making measurement a default practice during code development makes your life much easier later on - see the JAMon sketch after this list
  • No Work without ticket
    That might be the hardest part to implement. Any code item needs to be related to a ticket. For the search component we used Github Enterprise, pimped up with Zenhub.
    A very typical flow is: someone (analyst, scrum master, offering manager, project architect, etc.) "owns" the ticket system and tickets flow down. Sounds awfully like waterfall (and it is). Breaking free from this and turning to "the tickets are created by the developers and are the actual standup" greatly improves team velocity. This doesn't preclude the creation of tickets by others, to fill a backlog or to create and extend user stories. Look for the middle ground.
    We managed to get Github tickets to work with Eclipse, which made it easy to create tickets on the fly. Once you are there, you can visualize progress using burn charts
  • Agile
    "Standup meeting every morning 9:30, no exception" - isn't agile. That's process strangling velocity. Spend some time to rediscover the heart of Agile and implement that.
    Typical traps to avoid:
    • use ticket closings as the (sole) metric. It only discourages the use of the ticket system as ongoing documentation
    • insist on process over collaboration. A "standup meeting" could be just a Slack channel for most of the time. No need to spend time every day in a call or meeting, especially when the team is large
    • Code is final - it's not. Refactoring is part of the package - including refactoring the various tests
    • Isolate teams. If there isn't a lively exchange of progress, you end up with silo code. Requires mutual team respect
    • Track "percent complete". This lives on the fallacy of 100% being a fixed value. Track units of work left to do (and expect that to eventually rise during the project)
    • One way flow. If the people actually writing code can't participate in shaping user stories or create tickets, you have waterfall in disguise
    • Narrow user definitions and stories: I always cringe at the Scrum template for user stories: "As a ... I want ... because/in order to ...". There are two fallacies: first, it presumes a linear, single-actor flow; second, it only describes what happens when it works. While it's a good start, adopting more complete use cases (the big brother of user stories) helps to keep the stories consistent. Go learn about Writing Effective Use Cases. The agile twist: a use case doesn't have to be complete to get started. Adjust and complete it as it evolves. Another little trap: the "users" in the user stories need to include infrastructure managers, db admins, code maintainers, software testers etc. - basically anybody touching the app, not just the final (business) users
    • No code reviews: looking at each other's code increases coherence in code style and accelerates bug squashing. Don't fall for the trap of "productivity drops by 50% if 2 people stare at one screen" - just the opposite happens
  • Big screens
    While co-located we squatted in booked conference rooms with whiteboards, post-it walls and projectors. Some of the most efficient working hours were two or three pairs of eyes walking through code, both in source and debug mode. During quiet time (developers need ample amounts of that; the Bose solution isn't enough), 27 or more inches of screen real estate boost productivity. At my home office I run a dual-screen setup with more than one machine running. (However, I have to admit: some of the code was written perched in a cattle-class seat travelling between Singapore and the US)
  • Automate
    We used both Jenkins and Travis as our automation platforms. The project used Maven to keep everything together. While Maven is a harsh mistress, the time spent providing all automation targets proved invaluable.
    You have to configure your test regime carefully. Unit tests should not only run in the CI environment, but on a developer's workstation - for the code (s)he touches. A full integration test for VoP, on the other hand, runs for a couple of hours. That's a task better left to the CI environment. Our Maven tasks included generating the (internal) website and the JavaDoc.
    Lesson learned: setting up a full CI environment is quite a task. Getting repeatable datasets in place (especially when you have time-sensitive tests like "provide emails from the last hour") can be tricky. Lesson 2: you will need more compute than expected; plan for parallel testing
  • Ownership
    David owned performance, Michael the build process, Raj the Query Parser, Christopher the test automation and myself the query strategy and core classes. Ownership didn't mean being the (l)only coder, but feeling responsible for and taking the lead in the specific module. With that sense of ownership at the code level we saw a number of refactoring exercises, to the benefit of the result, that would never have happened had we followed, Code Monkey style, an analyst's or architect's blueprint.
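The JAMon sketch promised above: JAMon's core pattern is a start/stop pair around the code you want to measure. Class, method and label names here are made up for illustration - this is not the actual VoP code:

import com.jamonapi.Monitor;
import com.jamonapi.MonitorFactory;

public class InstrumentedCall {

    // Wraps one unit of work with a JAMon monitor and prints the statistics
    // JAMon has collected for that label so far
    public static void measuredWork(String label, Runnable work) {
        Monitor mon = MonitorFactory.start(label);
        try {
            work.run(); // the code you actually want to measure
        } finally {
            mon.stop();
        }
        System.out.println(label + ": hits=" + mon.getHits() + " avg=" + mon.getAvg()
                + " min=" + mon.getMin() + " max=" + mon.getMax() + " stddev=" + mon.getStdDev());
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            measuredWork("pob.search", () -> doSomeWork());
        }
    }

    private static void doSomeWork() {
        // stand-in for a real search call
        try {
            Thread.sleep(10);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}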
As usual YMMV

Posted by on 2017-01-03 09:48 | Comments (3) | categories: IBM Notes Software Continuous integration

The totally unofficial guide to Verse on Premises


Now that CNGD8ML is upon us, it is story time. Read about the why, who, what and what to watch out for.

To successfully deploy Verse, make sure to carefully read and implement the installation instructions. The availability of Verse makes Domino the most versatile eMail platform around, offering you the choice of Notes Client, Outlook, POP3, IMAP4, iNotes, Verse, iOS and Android. Anyway, here we go:

The back story

Verse on premises was a long (out)standing promise to the IBM customer base. Not everybody is ready to embrace the cloud, but many are interested in the new way to work. In SmartCloud Notes, the backend for Verse in the cloud, all search is powered by Apache Solr. Had Verse been delivered as-is, that would have required substantial hardware and skill investments from the on-premises customers.

So I made a bet with Michael Alexander, whom I worked with on TPTSNBN, that we could use standard Domino capabilities, not requiring Solr. Based on prototypes with vert.x and Java8 we gained confidence and got the go-ahead to build the search component as an OSGi plug-in (in Java6). So the search part (not the UI or other functionality) is on me.

The team(s)

There were two distinct teams working on the delivery of Verse on Premises (VoP): the core Verse team, which owns UI, functionality and features for both cloud and on premises, and the search plugin team, responsible for replacing the Solr capabilities with native Domino calls.
The former is rather large, distributed between the US, Ireland and China. The latter was led by the distinguished engineer David Byrd and had just a few core coding members: David, Michael, Christopher, Raj and myself.
We were supported by a team of testers in Belarus and the Philippines. The test teams wrote hundreds of JUnit and Postman tests, just for the search API.

The Orangebox

Each project needs a good code name. The original Verse code name was Sequoia, which is reflected in the name of the plugins for core and UI functionality.

The search component, not being part of RealVerse™, needed a different name. In an initial high-level diagram, outlining the architecture for management, the search component was drawn as an orange box. Since we "just" had to code "the orange box", the name stuck and led to our code name "Project OrangeBox" (PoB).
The unofficial Project Orange Box logo
You can find Orangebox and POB in multiple places (including notes.ini variables and https calls the browser makes). So now you know where it is coming from.


Read more

Posted by on 2016-12-30 08:17 | Comments (b) | categories: IBM Notes

Domino meets RXJava


Verse on premises (VoP) is nearing its second beta release and fellow Notes experts are wondering if they need to install Apache Solr as part of the VoP deployment. There was a lengthy, high-quality discussion and quite some effort evaluating alternatives. In the end it was decided to deliver the subset of Solr capabilities needed for VoP as a series of OSGi plugins for the Domino server. The project was formed out of the experience with ProjectCastle and continues as Project OrangeBox to deliver these plugins. In VoP you might encounter one or the other reference to PoB, so now you know where it comes from.
One of the design challenges to solve was emulating the facet results of the Solr search engine. I built some prototypes and finally settled on the use of RxJava.
RxJava is a member of the ReactiveX programming family, which is designed around the Observer pattern, iterators and functional programming. Check out the main site to get into the groove.
The task at hand is to convert something Domino (a ViewNavigator, a DocumentCollection or a Document) into something that emits subscribable events. I started with turning a document into a NotesItem emitter. The purpose was to create lightweight Java objects that contain only the items I'm interested in. Since Domino's Java has special needs (think recycle()) and I couldn't use the ODA, special precaution was needed.
There are plenty of methods to create an Observable, and at first glance Create looks most promising, but it leaves the question of recycling open. Luckily there is the Using method, which creates a companion object that lives alongside the Observable and gets explicitly called when the Observable is done. To create the NotesItem-emitting Observable I settled on the From method with an Iterable as source. The moving parts I had to create were class DocumentSource implements Iterable<Item> and class ItemIterator implements Iterator<Item>
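To make that concrete, here is a stripped-down sketch of the Iterable side. It is illustrative only: the real DocumentSource also takes a Session (as the pipeline below shows) and delegates to a recycle-aware ItemIterator, while this version simply walks doc.getItems():

import java.util.Iterator;
import java.util.Vector;

import lotus.domino.Document;
import lotus.domino.Item;
import lotus.domino.NotesException;

import rx.Observable;

public class DocumentSource implements Iterable<Item> {

    private final Document doc;

    public DocumentSource(Document doc) {
        this.doc = doc;
    }

    @SuppressWarnings("unchecked")
    @Override
    public Iterator<Item> iterator() {
        try {
            // getItems() returns a Vector of the document's items
            return ((Vector<Item>) this.doc.getItems()).iterator();
        } catch (NotesException e) {
            throw new IllegalStateException(e);
        }
    }

    // turns the document into an emitter of its items
    public Observable<Item> getItemStream() {
        return Observable.from(this);
    }
}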
Why Reactive? In a nutshell: a source emits data and any number of subscribers can subscribe to it. Between emission and subscription, any number of filters, modifiers and aggregators can manipulate the emitted data. Since each of them lives in its own little class, testing and composition become very easy. Let's look at an example:
docSource.getItemStream(session).filter(nameFilter).map(toPobItem).map(nameMapper).subscribe(new ItemAdder());
You can almost read this aloud: "The source emits a stream of items, they get filtered by name, then converted into another Java object (PobItem) and renamed before being handed to the subscriber." In a different case you might want to collect all entities (users, groups, roles) that have access to a document, so you might create a "readerAuthorFilter". The individual classes are very easy to test. E.g. the name filter looks like this:
// requiredFields is a Collection<String> of NotesItem names to include or exclude
ItemNameFilter nameFilter = new ItemNameFilter(requiredFields, ItemNameFilter.FilterMode.INCLUDE);

import java.util.Collection;
import java.util.HashSet;
import java.util.Set;
import java.util.logging.Level;
import java.util.logging.Logger;

import lotus.domino.Item;
import lotus.domino.NotesException;

import rx.functions.Func1;

public class ItemNameFilter implements Func1<Item, Boolean> {

    public enum FilterMode {
        INCLUDE, EXCLUDE;
    }

    private final Logger      logger      = Logger.getLogger(this.getClass().getName());
    private final Set<String> itemNameSet = new HashSet<String>();
    private final FilterMode  filterMode;

    /**
     * Flexible include or exclude
     *
     * @param itemNames
     *            Collection of names to include or exclude
     * @param filterMode
     *            INCLUDE or EXCLUDE
     */
    public ItemNameFilter(Collection<String> itemNames, FilterMode filterMode) {
        this.filterMode = filterMode;
        this.updateItemNames(itemNames);
    }

    public ItemNameFilter(Collection<String> itemNames) {
        this(itemNames, FilterMode.INCLUDE);
    }

    private void updateItemNames(Collection<String> itemNames) {
        // store lowercase, so the case insensitive lookup in call() works
        for (String name : itemNames) {
            this.itemNameSet.add(name.toLowerCase());
        }
    }

    @Override
    public Boolean call(Item incomingItem) {
        boolean result = false;
        try {
            boolean inList = this.itemNameSet.contains(incomingItem.getName().toLowerCase());
            // INCLUDE keeps the items on the list, EXCLUDE keeps everything else
            result = FilterMode.INCLUDE.equals(this.filterMode) ? inList : !inList;
        } catch (NotesException e) {
            this.logger.log(Level.SEVERE, e.getMessage(), e);
            result = false;
        }
        return result;
    }
}


Read more

Posted by on 2016-09-13 01:33 | Comments (1) | categories: IBM Notes

Metawork, nobody is capable but all participate grudgingly


This article is a translation/paraphrase of Professor Gunter Dueck's original post titled DD265: Metawork – keiner kann's, aber alle machen ärgerlich mit (May 2016). Professor Dueck's philosophy resonates with me, so I'd like to make his thoughts available to a wider audience. Bear with my Gerlish. Remarks in brackets aren't part of the original text; they are my comments, extensions or explanations. Here we go:

Metawork is your own effort to organize work (yours and others'), as opposed to performing the actual work. It is about coordinating your contributions, more often than not across multiple projects. This includes managing decisions (through eMail) and communicating with all stakeholders. E.g. you can use efficient (Dueck used the word "fertile", but I'm not sure it has the same resonance in English) meetings to establish the approach for structuring and executing work together. Over time a corporate culture emerges where commonly good metawork forms the enabler for efficient execution of the core work (we'll learn another term for this just below).

In reality, however, there are quarrels in meetings about who does what. Conflicts surface, everyone speaks their mind unfiltered, meetings drag on and on. People hold grudges, get annoyed and are left with the feeling of having wasted valuable time they won't get back. Dueck checked what the web has to say about metawork. His favorite place is the Urban Dictionary, where ordinary people contribute definitions for difficult terms and provide lots of suggestions. The best of them are the odd ones.

People rant online about being overburdened with unproductive responsibilities, unable to get anything done. Someone shares that in a development project staffed with eight people only two of them code. The rest warms seats in the meeting room and is first in line for promotion if the project is successful. What a mess!

Hmmm. So your own work is productive, anything else is a distraction - without thinking about what your fellow project members see as their "productive work". An example: if the developers miss a deadline, it generates a lot of distraction for the rest of the team. "Everything would be perfect if the coders worked properly! We have to integrate into SAP, everybody is waiting. What a cluster f**k (that's the closest cultural equivalent to "Supergau" I could think of)." - The two developers retort: "Had you contributed code instead of babbling in all those meetings, we would be done by now."

This is a clear indicator, which Dueck sees in all corporations, that the different project members have no understanding of the tasks of their fellow members. If they do know them, they doubt their importance or usefulness. One's own work is important, anything else is a distraction. Others only interrupt. Then they quarrel in meetings.

  Why oh why? All are well trained for their own tasks and complete them quite well. However, roughly nobody has been educated on metawork: how to get organized and collaborate. They do some of it every day, limping along without having or wanting to learn about it. It never was a topic. They bitch the whole day about the drag of metawork without being able to fully grasp it, lacking a word for it, not aware of the term metawork. Managers and project leaders follow the prevalent methodologies and press forward. More often than not, they aren't aware of metawork either. They manage or lead as "their own work", but hardly spend a thought on the work as a whole.

Even if managers knew how to coordinate well and fuse the parts into a whole, how to deal with unknowns and avoid conflict - it falls short when their reports have no clue what metawork is.

When team members only spend half of their time on "their own work" (e.g. programming) and are irate about the "stolen" time otherwise spent in meetings, they haven't understood the very nature of work - or the metawork is done mind-bogglingly badly.

Metawork is about the principles and foundations of performing work. Those who haven't given it a thought bungle through each project, wondering how it could possibly work. Every conflict is new, different and unique. Each project has its own singular surprises. What a madhouse! Lots of literature reinforces that point of view.
However, that's because everyone focuses only on their own tasks at hand and never learns to pay respect to the other significant contributions.

Dueck suggested in his book „Verständigung im Turm zu Babel" (Communication in the Tower of Babel) and his blog to contrast meta communication and mesa communication. „Mesa" is Greek for „inside", „meta" is more like „and beyond". In the context of work, „mesawork" is the individual task at hand and „metawork" anything beyond that. Dueck sees it over and over again: nobody is really good at meta communication, everybody just communicates straight off their chest. Similarly, we are good at mesawork but bemoan the complexity of the world, since we can't relate to metawork.

Shall we leave it that way? Half of our time being experts, half of it clueless n00bs? Isn't the balance tipping towards cluelessness, since the need for metawork is rising in an increasingly complex world? How about you? Happy to continue just fretting?

Posted by on 2016-05-13 07:41 | Comments (1) | categories: After hours

Mach Dich auf die Socken!


A common requirement in corporate systems is "let me know when something is going on". In Notes we use "On document creation or update" triggered agents to process such events. To let external systems know about such a change, R8 introduced the web service client. This works well in distributed systems, but requires quite some work on both ends. In a recent case I had to optimize the communication between Domino and a task running on the same machine. The existing solution was polling the Domino API in short intervals for updates - something I would call donkey mode. Sockets to the rescue. A few lines of Java in a triggered agent put an end to donkey mode and provide the receiving end with all it needs, in time:
import java.io.BufferedReader;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.Socket;

import lotus.domino.AgentBase;
import lotus.domino.AgentContext;
import lotus.domino.Database;
import lotus.domino.Document;
import lotus.domino.DocumentCollection;
import lotus.domino.NotesException;
import lotus.domino.Session;

import com.issc.castle.domino.Utils;

public class JavaAgent extends AgentBase {

	public static String	sockethost	= "127.0.0.1";
	public static int		socketport	= 1234;

	public void NotesMain() {
		Session session = null;
		AgentContext agentContext = null;
		Database db = null;
		DocumentCollection dc = null;
		Document doc = null;

		// The socket elements
		DataOutputStream out = null;
		BufferedReader in = null;
		Socket socketClient = null;
		try {
			// Get the Notes parts
			session = getSession();
			agentContext = session.getAgentContext();
			db = agentContext.getCurrentDatabase();
			dc = agentContext.getUnprocessedDocuments();

			// Get the socket
			socketClient = new Socket(sockethost, socketport);
			in = new BufferedReader(new InputStreamReader(socketClient.getInputStream()));
			out = new DataOutputStream(socketClient.getOutputStream());

			doc = dc.getFirstDocument();
			while (doc != null) {
				Document nextDoc = dc.getNextDocument(doc);
				this.signalOneDocument(doc, in, out);
				Utils.shred(doc);
				doc = nextDoc;
			}

			// Mark them done
			dc.updateAll();
		} catch (Exception e) {
			e.printStackTrace();
		} finally {
			Utils.shred(doc, dc, db, agentContext, session);
			// Close them
			try {
				if (out != null) {
					out.close();
				}
				if (in != null) {
					in.close();
				}
				if (socketClient != null) {
					socketClient.close();
				}
			} catch (IOException e) {
				e.printStackTrace();
			}
		}
	}

	private void signalOneDocument(final Document doc, final BufferedReader in, final DataOutputStream out) {
		try {
			String notesURL = doc.getNotesURL();
			out.writeBytes(notesURL);
			out.writeBytes("|");
		} catch (NotesException e) {
			e.printStackTrace();
		} catch (IOException e) {
			e.printStackTrace();
		}

	}
}

No libraries to load; the only utility function used is Utils.shred(), which is an error-wrapped recycle() call.
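For completeness: the receiving end can be equally small. A hedged sketch (not the actual task, just an illustration) that prints every NotesURL as it arrives, splitting on the "|" delimiter the agent above writes:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;

public class NotesUrlListener {

    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(1234)) {
            while (true) {
                try (Socket client = server.accept();
                        BufferedReader in = new BufferedReader(
                                new InputStreamReader(client.getInputStream()))) {
                    StringBuilder url = new StringBuilder();
                    int c;
                    while ((c = in.read()) >= 0) {
                        if (c == '|') {
                            // one complete NotesURL received
                            System.out.println("Updated: " + url);
                            url.setLength(0);
                        } else {
                            url.append((char) c);
                        }
                    }
                }
            }
        }
    }
}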
As usual YMMV
(Bad German pun in the title)

Posted by on 2016-05-09 10:46 | Comments (0) | categories: IBM Notes Java

Annotations to supercharge your vert.x development


ProjectCastle is well under way. Part of it, the part talking to Domino, is written in Java8 and vert.x. With some prior experience in node.js development, vert.x will look familiar: based on an event loop and callbacks, you develop in a very similar way. The big differences: vert.x runs on the JVM8; it is, by the nature of the JVM, multi-threaded; it features an event bus; and it is polyglot - you can develop in a mix of languages: Java, JavaScript, Jython, Groovy etc.
This post reflects some of the approaches I found useful developing with vert.x in Java. There are 3 components which are core to vert.x development:
  • Verticle

    A unit of compute running an event loop. Usually you start one verticle (optionally with multiple instances) as your application, but you might want/need to start additional ones for longer-running tasks. A special version is the worker verticle, which runs from a thread pool to allow the execution of blocking operations
  • EventBus

    The different components of your application message each other via the EventBus. Data sent over the EventBus can be a String, a JsonObject or a Buffer. You can also send any arbitrary Java class as a message once you have defined a codec for it (see the sketch after this list)
  • Route

    Like in node.js, a vert.x web application can register routes and their handlers to react to web input under various conditions. Routes can be defined using URLs, HTTP verbs and Content-Types (for POST/PUT/PATCH operations)
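As a taste of the EventBus, a hedged sketch against the Vert.x 3 API (address and payload are made up): one component registers a consumer, another sends it a JsonObject and handles the reply.

import io.vertx.core.Vertx;
import io.vertx.core.json.JsonObject;

public class EventBusDemo {

    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        // one component registers a consumer...
        vertx.eventBus().consumer("castle.echo", message ->
                message.reply(((JsonObject) message.body()).put("seen", true)));
        // ...another one sends a JsonObject and prints the reply
        vertx.eventBus().send("castle.echo", new JsonObject().put("hello", "world"),
                reply -> System.out.println(reply.result().body()));
    }
}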
Ideally, when defining a route and a handler, a verticle or a potential message for the EventBus, all necessary code stays contained in the respective source file. The challenge is to register the components when the application starts. Your main verticle doesn't know what components are in your application, and manually maintaining loader code is a pain to keep in sync (besides leading to merge conflicts when working in a team).
Java annotations to the rescue! If you are new to annotations, go and check out this tutorial to get up to speed. For my project I defined three of them, with one being able to be applied multiple times.

CastleRequest

A class annotated with CastleRequest registers its codec with the EventBus, so the class can be sent over the EventBus and gets encoded/decoded appropriately. A special value for the annotation is "self", which indicates that the class itself implements the MessageCodec interface
@Documented
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE})
public @interface CastleRequest {
  // We use value to ease the syntax
  // to @CastleRequest(NameOfCodec)
  // Special value: self = class implements the MessageCodec interface
  String value();
}

CastleRoute

This annotation can be assigned multiple times, so two annotation interfaces are needed
@Documented
@Repeatable(CastleRoutes.class)
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE})
public @interface CastleRoute {
  String route();
  String description();
  String mimetype() default "any";
  String method() default "any";
}

and the repeatability annotation (new with Java8):
@Documented
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE})
public @interface CastleRoutes {
  CastleRoute[] value();
}

CastleVerticle

Classes marked with this annotation are loaded as verticles. They can implement listeners to the whole spectrum of vert.x listening capabilities
@Documented
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE})
public @interface CastleVerticle {
  String type() default "worker";
  int instances() default 0;
  boolean multithreaded() default false;
}
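How do the annotated classes get loaded? A hedged sketch (not the actual ProjectCastle loader): at startup you hand a list of candidate classes to a helper that deploys everything carrying @CastleVerticle. A real implementation would scan the classpath instead of taking the candidates as parameters:

import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;

public class CastleLoader {

    public static void deployAnnotated(Vertx vertx, Class<?>... candidates) {
        for (Class<?> clazz : candidates) {
            CastleVerticle meta = clazz.getAnnotation(CastleVerticle.class);
            if (meta == null) {
                continue; // not annotated, nothing to deploy
            }
            DeploymentOptions options = new DeploymentOptions()
                    .setWorker("worker".equals(meta.type()));
            if (meta.instances() > 0) {
                options.setInstances(meta.instances());
            }
            vertx.deployVerticle(clazz.getName(), options);
        }
    }
}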

Read more

Posted by on 2016-04-02 08:01 | Comments (0) | categories: vert.x

Now we are token - Authorization using JSON Web Token in Domino


After having Vert.x and Domino co-exist, the door opens for a few interesting applications of the newfound capabilities. One sticky point in every application landscape is authentication and authorization. This installment is about authorization.
The typical flow:
  1. you access a web resource
  2. provide some identity mechanism (in the simplest case: username and password)
  3. in exchange get some proof of identity
  4. that allows you to access protected resources.
In Basic authentication you have to provide that proof every time, in the form of an encoded username/password header. Since that limits you to username and password, all other mechanisms provide you, in return for your valid credentials, with a "ticket" (technically a "Bearer Token") that opens access.
I tend to compare this to a movie theater: if you want to enter the room where the movie plays, you need a ticket. The person checking it is only interested in one thing: is it valid for this show? Whether you paid in cash, with a card, got it as a present or won it in a lucky draw; whether you bought it just now, online, or yesterday - they don't care. They only care: is it valid. The same applies to our web servers.
In the IBM world the standard here is an LTPA token that gets delivered as a cookie. Now cookies (besides being fattening) come with their own little set of troubles and are kind of frowned upon in contemporary web application development.
The current web API token darling is the JSON Web Token (JWT). They are an interesting concept since they sign the data provided. Be clear: they don't encrypt it, so you need to be careful if you want to store sensitive information (encrypt it first).

Now how to put that into Domino?

The sequence matches the typical flow:
  1. User authenticates with credentials
  2. server creates a JWT
  3. stores JWT and credentials in a map, so when the user comes back with the token, the original credentials can be retrieved
  4. delivers JWT to caller
  5. Caller uses JWT for next calls in the header
It isn't rocket science to get that to work.
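A minimal sketch of step 2, the token creation, using only JDK 8 classes. The claim set is deliberately tiny and the class is illustrative; a real implementation would add expiry (exp) and issued-at (iat) claims and proper JSON handling:

import java.nio.charset.StandardCharsets;
import java.util.Base64;

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class JwtSketch {

    // a JWT is base64url(header) + "." + base64url(payload) + "." + base64url(signature)
    public static String createToken(String userName, byte[] secret) throws Exception {
        Base64.Encoder b64 = Base64.getUrlEncoder().withoutPadding();
        String header = b64.encodeToString(
                "{\"alg\":\"HS256\",\"typ\":\"JWT\"}".getBytes(StandardCharsets.UTF_8));
        String payload = b64.encodeToString(
                ("{\"sub\":\"" + userName + "\"}").getBytes(StandardCharsets.UTF_8));
        // sign header.payload with HMAC-SHA256
        Mac hmac = Mac.getInstance("HmacSHA256");
        hmac.init(new SecretKeySpec(secret, "HmacSHA256"));
        byte[] signature = hmac.doFinal((header + "." + payload).getBytes(StandardCharsets.UTF_8));
        return header + "." + payload + "." + b64.encodeToString(signature);
    }
}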

Read more

Posted by on 2016-02-25 01:37 | Comments (1) | categories: IBM Notes vert.x

The Cloud Awakening


It is a decade since Amazon pioneered cloud as a computing model. Buying ready-made applications (SaaS) enabled non-IT people to quickly acquire solutions that IT, starved of budget, skills and business focus, couldn't or didn't want to deliver. Products like Salesforce or Dropbox became household brands.
But the IT departments got a slice of the cloud cake too, in the form of IaaS. For most IT managers IaaS feels like the extension of their virtualization strategy, just running in a different data center. They still would patch operating systems, deploy middleware, design never-to-fail platforms. They are in for an awakening.
Perched in the middle between SaaS and IaaS you find the cloud age's middleware: PaaS. PaaS is a mix that reaches from almost-virtual-machines like Docker, to compute platforms like IBM Bluemix runtimes, Amazon Elastic Beanstalk or Google Compute Engine, all the way to the new nano services like AWS Lambda, Google Cloud Functions or IBM OpenWhisk. Without closer inspection a middleware professional would breathe a sigh of relief: middleware is here to stay.
Not so fast! What changed?
There's an old joke that claims the IBM WebSphere architecture lets you build one cluster to run the whole planet on - and to keep running after mankind is gone. So the guiding principles are: provide a platform for everything, never go down. We spent time and time again (and budget) on this premise: middleware is always running. Not so in the brave new world of cloud. Instead of one rigid structure that runs and runs, a swarm of light compute (like WebSphere Liberty) does one task each, and one task can run on a whole swarm of compute. Instead of robust and stable, these systems are resilient, summed up in the catch phrase: fail fast, recover faster.
In a classical middleware environment the failure of a component is considered catastrophic (even if mitigated by a cluster); in a cloud environment, that's expected. A little bit like a bespoke restaurant that stays closed when the chef is sick vs. a burger joint, where one of the patty flippers not showing up is barely noticeable.
This requires a rethink: middleware instances become standardized, smaller, replaceable and repeatable. Gone are the days when one could spend a week installing a portal (as I had the pleasure of doing a decade ago). The rethink goes further: applications can't be a "do-it-all" in one big fat chunk. First, they can't run on these small instances; second, they take too long to boot; third, they are a nightmare to maintain and extend. The solution is DevOps and microservices. Your compute hits the memory or CPU limit? No problem, all PaaS platforms provide a scale-out. It's fun to watch in testing how classically developed software fails in these scenarios: suddenly the Singleton that controls record access isn't so single anymore. It has evil twins on each instance.
You're aiming at 99.xxx availability? The classical approach is to have multi-way clusters (which in the end don't do much if the primary member never goes down). In the PaaS world: have enough instances around. Even if an individual instance has only 90% availability (a catastrophic result in classic middleware), the swarm of runtimes at a moderate member count gets you to your triple digits after the dot. You can't guarantee that Joe will flip the burgers all the time, but you know: someone will be working on any given day.
And that's the cloud awakening: the transition from solid to resilient*, from taking things for granted to working with what is there - may the howling begin.

* For the record: how many monarchs who had SOLID castles are still in charge? In a complex world, resilience is the key to survival

Posted by on 2016-02-23 10:31 | Comments (1) | categories: Software