wissel.net

Usability - Productivity - Business - The web - Singapore & Twins

By Date: February 2014

What is your investment in Notes applications - revisited


About 2 years ago I asked: "What's your investment in Notes Applications?" and provided steps using DXLMagic to base the answer to that question on evidence rather than opinion. With the arrival of version control capabilities in Domino Designer that task became easier (or different - your take). Revisiting the code base I devised new requirements:
  • The analysis should run against the On-Disk-Project rather than the NSF. The simplifying assumption here: you have deselected the "Use Binary DXL for source control operations" option
    (image: deselect binary DXL)
    The "binary" format is stored in BASE64, so it wouldn't be impossible to decode, but changing the parsing would be quite some work, since the tags change too
  • The analysis code should have no Notes dependencies, so I can run it on an integration server for continuous measurements
  • It should be able to analyse a bunch of databases in one go
The result of these new requirements is cocomo.jar, which allows you to simply run java -jar cocomo.jar DirectoryAboveYourOnDiskProjects ReportFile.csv.
Clarification (thx Stefan): DirectoryAboveYourOnDiskProjects is the location where all the directories of your individual On Disk projects are. So if you have C:\ODP\App1, C:\ODP\App2 and C:\ODP\OneMoreApp, you only run the tool once with java -jar cocomo.jar C:\ODP report.csv

The csv file contains one row per analysed database, with the last column being the Lines of Code total for that application. That is the number you fill into the New field of the CoCoMo Tool. Then add Cost per Person-Month (Dollars) and hit Calculate. You get very enlightening results. After that, feel free to play with the settings of the tool.
On its first run cocomo.jar writes out a properties file that defines what columns go into your csv report file; you can manipulate them as you deem fit. All the other settings are inside the jar - to peek into them, have a look at the source.
When your On Disk Projects are in different places, you can call the app with a 3rd parameter that points to a plain text file that simply lists directory names, one per line to be analysed: java -jar cocomo.jar DirectoryAboveYourOnDiskProjects ReportFile.csv FileWithDirectoryNamesOnePerLine
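
For illustration, a hypothetical FileWithDirectoryNamesOnePerLine could look like this (the paths are made up):

    C:\ODP
    D:\Projects\MoreODPs
    /home/ci/onDiskProjects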

As usual: YMMV

Posted on 26 February 2014 | Comments (2) | categories: IBM Notes

Numbers are numbers, you have to see it! - Selenium edition


When looking at performance data and comparisons, numbers are just that: "X is 23% faster than Y" is a statement few people can actually visualize. You have to see it in action to get a feel for the real difference. That applies to vehicles and web sites in the same manner.
Instinctively one would opt for a load test to see loading speeds, but after checking the options I found that a functional test will do just fine. My tool of choice here is Selenium WebDriver. It can be easily integrated into JUnit tests and, with a little effort, can even record the whole session automatically. So here is my test plan:
  1. Get a list of 2-3 URLs from the command line
  2. Open a new clean browser session for the number of URLs fetched
  3. Position the browser windows next to each other, so each has the same size
  4. Wait for the user to hit Enter to start (so (s)he can adjust window sizes or resequence them)
  5. Spin off one thread for each URL to load the page
  6. Wait again
  7. Tear down the setup
Sounds much more complicated than it actually is. The whole code is about one hundred lines and can be easily extended to do more things. Selenium provides an IDE that assists to some extent in getting started. I like Selenium for a number of reasons:
  • Can be fully integrated in JUnit tests
  • No new language to learn (it has bindings for quite a few)
  • Functional test can be done without a specific browser using a generic web driver
  • Provides visible browser drivers (for Firefox and others) that by default use a new clean profile (no cache, no cookies)
  • Rich community and tons of examples
  • Can be used in your own code or delegated to a cloud based testing service
  • Can test JavaScript, Ajax, Drag & Drop and Mobile
I run the code from a command line window, 3 lines high, perched at the bottom of my screen, so it doesn't get in the way of the big browser windows. Here comes the code:
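
The full listing is behind the read-more link; as a teaser, here is a minimal sketch of the plan above. The class name and window geometry are my own illustration, and it assumes Firefox plus the Selenium WebDriver jars on the classpath:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Scanner;

    import org.openqa.selenium.Dimension;
    import org.openqa.selenium.Point;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public class BrowserRace {
        public static void main(String[] args) throws InterruptedException {
            // Steps 1+2: one clean browser session per URL from the command line
            List<WebDriver> drivers = new ArrayList<WebDriver>();
            for (int i = 0; i < args.length; i++) {
                WebDriver d = new FirefoxDriver(); // fresh profile: no cache, no cookies
                // Step 3: position the windows next to each other, same size
                d.manage().window().setSize(new Dimension(500, 800));
                d.manage().window().setPosition(new Point(i * 510, 0));
                drivers.add(d);
            }
            // Step 4: wait for the user before starting the race
            Scanner stdin = new Scanner(System.in);
            System.out.println("Adjust the windows, then hit Enter to start");
            stdin.nextLine();
            // Step 5: spin off one thread per URL so the pages load simultaneously
            List<Thread> threads = new ArrayList<Thread>();
            for (int i = 0; i < args.length; i++) {
                final WebDriver d = drivers.get(i);
                final String url = args[i];
                Thread t = new Thread(new Runnable() {
                    public void run() {
                        d.get(url);
                    }
                });
                threads.add(t);
                t.start();
            }
            for (Thread t : threads) {
                t.join();
            }
            // Steps 6+7: wait again, then tear down
            System.out.println("Hit Enter to close the browsers");
            stdin.nextLine();
            for (WebDriver d : drivers) {
                d.quit();
            }
        }
    }

Run it as java BrowserRace http://siteA http://siteB and watch the race.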

Read more

Posted on 15 February 2014 | Comments (0) | categories: Software

Domino Development - Back to Basics - Part 7: Map Reduce Domino Style


One of the odd things about Domino is the way things are called. It is Memo instead of eMail, Replication instead of Sync, Note store/Document store instead of NoSQL etc. The simple reason is the fact that all these capabilities predate the more common terms and there was no label for them when Notes introduced them. In NoSQL circles MapReduce is a hot topic. Introduced by Google, now part of Apache Hadoop, it can be found in MongoDB, Apache CouchDB and others. Interestingly, it seems that outside of Hadoop the mapping doesn't run distributed, but on a single server.
So what about Domino? Holding up the tradition of odd naming, the equivalent of MapReduce is the Categorized View, where the view is the Map component and categorization is the Reduce capability. The mapping part gets accomplished using the venerable @Formula (pronounced AT-formula) language, which was inspired by LISP. If you ever wondered why you had to sort out all the "yellow triangles" in kindergarten (a.k.a. set theory): once you get started with @Formulas, that knowledge comes in handy. You mainly apply transformations, using a set of @functions, to lists of values (even a single value is a list - a list with one member). While there is a loop construct, staying with sets/lists is highly efficient.
In the simplest case a column in a view is simply the name of a Notes item in the documents (which will return an empty value where no such item exists). The formulas are used in 2 places: defining the column values of the data that is returned, and selecting the documents to be included in the composition of the view. The default selection is SELECT @All, which lists all documents in the database regardless of the form (if any) used to create them or the items contained in them. This is very different from tables/views in an RDBMS, where each line item has the same content. Typically Notes application designers make sure that all documents share a common set of items, so when combined in a view something useful can be shown. Often these item names are lifted from the mail template: Subject, Categories, From, Body.
Sorting the resulting values is done as a property of the column (second tab in the property box), where sorting runs from left to right. In classic Domino you can find views where specific columns are listed twice: once in the position where the user wants to see them, and a second time in the sequence needed for sorting, but with the attribute "hide this column". In XPages this is no longer necessary, since the view control allows you to position columns in a different sequence than the underlying view.
When you categorize a column, typically starting at the first column, Domino offers a set of functions for the other columns to compute reduce values:
(image: Reduce in Domino)
You can access the values using ?OpenView&CollapseAll in Domino's URL syntax or in code using a ViewNavigator. A categorized view always starts with a category, so you begin with a .getFirst() followed by .getNextSibling() for one-level categories or .getNextCategory() if you have multiple levels of categories. This capability helps when you aggregate data for graphics or pivot tables or [insert your idea here].
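
In Java that navigation could look like this minimal sketch (the view name is made up, and production code would also recycle the entries):

    import lotus.domino.Database;
    import lotus.domino.NotesException;
    import lotus.domino.View;
    import lotus.domino.ViewEntry;
    import lotus.domino.ViewNavigator;

    public class CategoryTotals {
        // Walks a view with one level of categories and prints each
        // category row: the category label plus the computed totals
        public static void printCategories(Database db) throws NotesException {
            View view = db.getView("ByCategory"); // hypothetical view name
            view.setAutoUpdate(false); // keeps navigation stable and fast
            ViewNavigator nav = view.createViewNav();
            ViewEntry category = nav.getFirst(); // a categorized view starts with a category
            while (category != null) {
                System.out.println(category.getColumnValues());
                category = nav.getNextSibling(category); // next category, skipping the documents
            }
        }
    }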

Posted on 13 February 2014 | Comments (3) | categories: IBM Notes XPages

The perception of emptiness


In an ACI Singapore class I'm studying Master Shantideva's famous work "A Guide to the Bodhisattva's Way of Life" (Bodhicaryavatara). A recurring tenet in his, or any other deep Buddhist teaching, is the concept that the perception of the world around us isn't coming at us, but from us. Objects around us have no inherent nature without being observed (did the tree in the forest fall if nobody saw it?). The true nature of things is emptiness, which Buddhist practice tries to perceive on the road to enlightenment.
My scientific mind screams: that cannot be! Nature is as it is, it doesn't care if it is perceived or not. Gravity was there, long before Newton was hit by an apple and works as it works, regardless of our understanding for it.
I have a tried and tested strategy to deal with things I don't understand: I try to explain it to others. Works more often than not. Here we go:
(image: The perception of lightning changes over time; based on a Wikimedia original)
Looking at the perception of lightning: over time it moved from "the gods are angry" to "a weather phenomenon" to "an electrical charge" etc. With each scientific discovery our understanding of the phenomenon changes - this is, by the way, the beauty of the scientific method: adapting our view of the world based on our expanding abilities.
With each insight we commonly believe we get closer to the real nature of things - unless you ask really smart people, who will confirm that every answer poses new questions.
In a nutshell:
Our perception of "reality" is limited and defined by the filter of our abilities, preferences or insights (one man's delight can be another man's poison). Whatever we discover, we are stuck inside our ever expanding balloon of perceptions, not able to see through the skin and encounter the reality outside. To perceive the ultimate reality we need to pierce through that shell called the "I". Buddhists suggest using deep meditation for that. Leaving the I behind, the duality between observer and observed breaks down. At that moment the realisation kicks in that the ultimate reality is emptiness.
Sharing such an experience is (almost?) impossible; Lao Tsu tells us why in the Dao De Ching: "The Tao that can be spoken is not the eternal Tao". So here you have it, the spiritual triptych of attempts to label what defies labeling: "Emptiness, Ultimate Reality, TAO".
Something to chew on: when the I dissolved, who had the realisation?

Posted on 11 February 2014 | Comments (1) | categories: After hours

Domino Development - Back to Basics - Part 6: Better safe than sorry - Security


Continuing from Part 5, this installment will shed a light on security.
Domino applications are protected by a hierarchical control system. If you fail to pass one hierarchy level's test, it doesn't matter that a lower level would be compatible with your current credentials. E.g. even when a database allows anonymous access, if the server is configured to require authentication, you must authenticate. To fully understand the options, sit back and recall the difference between authentication and authorization. The former establishes who you are, the latter what you can do. Let's look at both parts:

Authentication

When using a Notes client (including XPiNC) or running a Domino server, the identity is established using a public-private key challenge. The key is stored in the Notes.id file and (usually) protected by a password, so it qualifies as 2-factor authentication (something you have, something you know). The beauty of this PKI approach is the availability of keys for signatures and encryption. Since users hate password entries, the unlocking of the Notes.id can be performed using various methods for single sign-on.
Accessing Domino through http(s) (or SMTP, IMAP and POP) uses open-standards-based authentication. I will focus on the http(s) options here. When a resource is protected (more on that later), the user is presented with an authentication challenge. In its simplest form that is http BASIC authentication, which prompts for a username and password. In web UIs this isn't en vogue anymore, but for REST-based access pulling and pushing JSON or XML it is still quite popular, since it is simple to use.
The most prevalent form (pun intended) is form-based authentication. If a protected resource requires higher access than the currently known user has (a user is anonymous before logging in), the user is redirected to a login page where username and password are requested. This authentication can be per server or for a group of servers using LTPA or 3rd party plug-ins.
Users are typically listed in the Domino Directory or any compatible LDAP directory configured as an authentication source. Users of Domino mail need to be in the Domino Directory for mail routing to work.
A typical (structural) mistake: trying to use a remote 3rd party LDAP, which adds network latency and a single point of failure (the remote LDAP). When your Domino application server is running, your directory service is available; I wouldn't step away from this robustness. If your users are strategically maintained in a more fragile directory, use TDI to keep the directories in sync.
The final option to authenticate users is the use of X509 certificates. This requires the public X509 key to reside in the directory and the private key in the user's browser. I haven't seen that in large rollouts, since it is a pain to administer.
Anyway, as a result of an authentication, a user is identified with an X500 compatible name:
CN=Firstname LastName/OU=OrgUnit4/OU=OrgUnit3/OU=OrgUnit2/OU=OrgUnit1/O=Organisation/C=Country
Country (/C) and the Organisation Units (/OU) are optional. Typically we see 1-2 OrgUnits in use. They are practical for distinguishing permissions and profiles, as well as a remedy for duplicate names. If not taken care of carefully, users authenticated through a 3rd party LDAP will follow LDAP naming conventions, which is a PITA. When @UserName() doesn't start with CN= (unless it returns Anonymous), you need to take your admins to task.
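
A minimal sketch of such a check in Java - the method is illustrative, the Session comes from your XPages or agent context:

    import lotus.domino.Name;
    import lotus.domino.NotesException;
    import lotus.domino.Session;

    public class NameCheck {
        public static void printUserName(Session session) throws NotesException {
            String user = session.getEffectiveUserName(); // canonical X500 format
            System.out.println("Canonical  : " + user);
            Name name = session.createName(user);
            System.out.println("Abbreviated: " + name.getAbbreviated()); // e.g. First Last/Unit/Org
            System.out.println("Common     : " + name.getCommon());
            // If the canonical form doesn't start with CN= (and isn't Anonymous),
            // an LDAP naming convention is leaking through - talk to your admins
        }
    }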
In any case, it is transparent to an application developer on Domino how a user authenticates; you only need to care that it happens when you protect a resource. Keep in mind: authentication is only secure via https!

Authorisation

Now that you know who the user is, you define what (s)he can do. Domino uses 2 mechanisms: the Access Control List (ACL) to define read/write access levels and the Execution Control List (ECL) to define what code a user can run. The first level of authorization is server access:

It doesn't matter what access a user might have inside a database if the server won't let the user access its resources. So having your admin lock down the server properly is your first line of defense. Server access is defined in the server document, which also contains the ECL for the server. A normal user doesn't need ECL permissions, only the ID that is associated with the XPage (or agent) the user wants to run. "Associated" here means: the ID the code was signed with. By default, every time a developer saves an XPage, that page gets signed with the developer's ID. This is very different from file-based application servers, where a file doesn't contain a signature, but similar to JAR signing in the Java world. Keep in mind: in Java that is a manual process using a command line tool and requires the acquisition of an (expensive) code signing certificate, while in Domino it is automatic.
Common practice, however, is not to allow a developer to run code on a production server (unless you are Paul), but to sign the database with either the server.id or a specific signer ID. This step would be performed by your admin.
When you access an NSF, the access level is set in the ACL to one of these values: No Access, Depositor, Reader, Author, Editor, Designer, Manager.
Additionally a user can have roles assigned to her that allow a refinement of interaction and access. Roles are always used with the name in square brackets. Typical roles are [server] or [Admin]. Roles are defined per NSF. The following tables show the permitted operations per access level:

Read Access


Having read access to a database doesn't automatically make all documents visible. They can still be protected by Reader fields. More on that below.

Write Access


Having Author access to the database enables a user to edit any document where the user is listed in one or more items of type Authors, while Editor access allows editing of any document the user can see. So as a rule of thumb:
The access level for normal users to a Notes/Domino based application should be set to Author
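
A minimal sketch of checking the effective access level in code - illustrative, not a complete authorization strategy:

    import lotus.domino.ACL;
    import lotus.domino.Database;
    import lotus.domino.NotesException;

    public class AccessCheck {
        // Could a UI offer an edit action at all?
        public static boolean mayEdit(Database db, String userName) throws NotesException {
            int level = db.queryAccess(userName); // one of the ACL.LEVEL_* constants
            // Authors can edit documents that list them in an Authors item,
            // Editors and above can edit any document they can read
            return level >= ACL.LEVEL_AUTHOR;
        }
    }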

Reader & Author fields

They are quite unique to Domino, since they allow declarative protection of a note. If you implement (in code) something similar in an RDBMS, you end up with quite a complex relation:

Looking at the above table you could conclude that Author fields are only relevant for users with exactly Author access to a database: if the access is lower, they won't be able to edit anything; if the access is higher, they can edit anything. However, in conjunction with Reader fields they have a second purpose. Reader protection is defined as:
When a document has at least one item of type Readers that has content (non-empty), then access to that document is restricted to users and servers that are explicitly (by name) or implicitly (by group membership, role or wildcard) listed in any of the Reader or Author items.
A typical pattern in a Notes application would be a computed Author field with the value [server]. The application server would get that role in the ACL. Since a server most likely has Manager access anyhow, this initially has no impact on the application. However, when some mechanism activates reader protection, this field ensures that the server can still see all documents. It is important to use an Author field/item here, to avoid triggering read protection where it isn't necessary. Reader fields have a performance cost, so use them wisely.
A popular mechanism is to provide a checkbox "Confidential" and then compute a reader field: @If(Confidential!="";DocumentPeople;""), where "DocumentPeople" would be an item of type Names. In workflow applications the content of reader and author fields isn't static, but changes with the progress of the workflow. Careful planning is required.
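
The same pattern as a Java sketch - the item names DocAuthors and DocReaders are made up for illustration:

    import lotus.domino.Document;
    import lotus.domino.Item;
    import lotus.domino.NotesException;

    public class ConfidentialFlag {
        public static void protect(Document doc) throws NotesException {
            // Authors item with the [server] role: keeps the server's access
            // without triggering read protection
            Item authors = doc.replaceItemValue("DocAuthors", "[server]");
            authors.setAuthors(true);
            if (!"".equals(doc.getItemValueString("Confidential"))) {
                // Copy the names item into a Readers item - the document becomes
                // invisible to everyone not listed in a Readers or Authors item
                Item readers = doc.replaceItemValue("DocReaders", doc.getItemValue("DocumentPeople"));
                readers.setReaders(true);
            } else {
                doc.removeItem("DocReaders"); // no non-empty Readers item = no restriction
            }
            doc.save(true, false);
        }
    }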

There is more

Notes can refine access further: documents can be encrypted and signed, and sections in a document can be access controlled or signed. However, those capabilities are not yet exposed to the XPages API and are thus of limited interest for new XPages developers.

Further readings

There are a number of related articles I wrote over the years. Questions? Leave a comment or ask on Stackoverflow.

Posted on 04 February 2014 | Comments (0) | categories: IBM Notes XPages

Download Connect 2014 presentation files


The show is over and the annual question arises: how do I download all the presentations? To do that, you will need a valid username and password for the Connect 2014 site - no anonymous access here. The 2014 site is built on IBM Portal and IBM Connections. IBM Connections has an ATOM REST API, which opens interesting possibilities. With a few steps you can get your hands on all the files. I will use CURL to do this.
  1. Create or edit your .netrc file to add your Connect 2014 credentials (in one line)
    machine connections.connect2014.com login [YourNumericID] password [YourNumericPassword] (Note [ and ] are NOT part of the line in the .netrc file)
  2. Download the feed. Checking this morning, I found a little more than 500 files. The Connections API allows for max 500 entries per "page", so 2 calls will be sufficient for now. You can check the number of files in the <snx:rank> element in the resulting XML:
    curl --netrc -G --basic -L 'https://connections.connect2014.com/files/basic/anonymous/api/documents/feed?sK=created&sO=dsc&visibility=public&page=1&ps=500' > page1.xml
    curl --netrc -G --basic -L 'https://connections.connect2014.com/files/basic/anonymous/api/documents/feed?sK=created&sO=dsc&visibility=public&page=2&ps=500' > page2.xml
    (explanation of parameters below)
  3. Transform the resulting files into a shell script using XSLT (see below; a plain-Java alternative follows after the parameter list): java -cp saxon9he.jar net.sf.saxon.Transform -t -s:page1.xml -xsl:connect2014.xslt -o:page1.sh
  4. Make the scripts executable (unless your OS executes arbitrary files anyway): chmod +x page1.sh
  5. Run the download: ./page1.sh
You are dealing with 2 sets of parameters here:
  • the CURL parameters
    • --netrc: pull the user name and password from the .netrc file
    • -G: perform a GET operation
    • --basic: use basic authentication
    • -L: follow redirects (probably not needed here)
    • (optional) -v: verbose output
  • the Connections Files API parameters
    • sK=created: sort by creation date
    • sO=dsc: sort descending
    • visibility=public: show all public files
    • page=1|2: what page to show. Start depends on page size
    • ps=500: show 500 files per page (that's the maximum Connections supports)
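
If XSLT isn't your thing: a rough plain-Java equivalent of the transformation in step 3 could look like the sketch below. Note that picking the link with rel="edit-media" as the download URL is an assumption about the feed format - check the actual XML and adjust:

    import java.io.File;
    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    public class FeedToScript {
        public static void main(String[] args) throws Exception {
            DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
            Document feed = builder.parse(new File(args[0])); // e.g. page1.xml
            NodeList entries = feed.getElementsByTagName("entry");
            System.out.println("#!/bin/sh");
            for (int i = 0; i < entries.getLength(); i++) {
                Element entry = (Element) entries.item(i);
                String title = entry.getElementsByTagName("title").item(0).getTextContent();
                NodeList links = entry.getElementsByTagName("link");
                for (int j = 0; j < links.getLength(); j++) {
                    Element link = (Element) links.item(j);
                    if ("edit-media".equals(link.getAttribute("rel"))) {
                        // One curl line per file, reusing the .netrc credentials
                        System.out.println("curl --netrc -L -o '" + title.replace("'", "_")
                                + "' '" + link.getAttribute("href") + "'");
                    }
                }
            }
        }
    }

Run java FeedToScript page1.xml > page1.sh and continue with step 4.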
As usual: YMMV

Read more

Posted on 04 February 2014 | categories: IBM