Tuesday 8 December 2015

A year in the cloud with IBM (part 2)

Yesterday I left you wondering, well... what next?

Now, to start with, we were not an on-prem Connections user before we moved to the cloud. The majority of user licenses we obtained were for the all-singing, all-dancing, everything-with-bells-on option, with only 1% mail-only (and a little bit of FILEage).

So where did people start with all the added extras?

Meetings, Files and Docs were the first out of the blocks.

Meetings got off to a good start, then stumbled; not because of anything intrinsically wrong, but because Google decided that Chrome was not going to support Java, and that put the kibosh on the plugin, leaving a good 25% of the user base with their default browser set to something that gives the cheerful but rather annoying message "Your Browser Is Not Currently Supported". While changing to another browser is not the end of the world, it feels like it is when you, as a user, have invested time and energy getting the browser you currently use working just the way you want it. Prior to this we had a fixed number of GoToMeeting licences, which came with their own set of problems and a fairly heavy price tag, whereas Meetings was built in to the licence cost and did not require a promise in triplicate, in blood, that the meeting was important to the gods of facilities. A quick trip to Maplin and a few USB headsets later (for lending to people) and Meetings is being used daily. We are looking at the Audio Bridge add-on to start in the new year to give dial-in facilities, and I have to say I am quite impressed with the costs I am being quoted.

Files and Docs meant that all of a sudden those large CAD designs and 1,000,000-page, image-heavy PowerPoints could be shuffled around the plants and offices without the need for IT to host them on a web or FTP server somewhere and arrange logons.
.... but ....
"Where were the folders?" was the first plaintive yell , followed soon after by "We want our folders!"
To begin with I thought this was a fair enough comment to make; years and years of sticking things in deeply nested folder trees that made sense at the time they were created was the happy place most users lived in. It took some time for both me and them to realise that the fastest way to find things was using TAGS and DESCRIPTIONS. Once you build up the metadata that surrounds the file in a way that gives it context, folders are not that important; nice to have, but not vital. Indeed, once they got their feet under them, they discovered that TAGS, DESCRIPTIONS and the collaborative comments all join together to give the object a lot more context than a folder tree and a stupidly long file name. [Go on, admit it, you too have created a file with a name like Estimated_ROI_On_Project_SMCD90a2105_Draft_For_Review_November.xlsx.] A bit of forethought meant that life without folders was not so bad. Nested folders are coming soon, but in the interim people have learned, and are learning, that they are not the only paradigm for successful file management.

Docs was slower to take off, yet it is now becoming the de facto method for creating the normal day-to-day documents that are the bread and butter of a manufacturing company. They no longer exist in 30 or 40 inboxes, on 30 or 40 hard drives or in a myriad of USB drives, all in varying states of being out of date. Now they are in Files or Communities being worked on, their versions being tracked and their comments full of what used to happen in emails.

Where Docs is not so useful is for the power users, the Dashboard Kings and Pivot Table Princes; however, they have settled in nicely with the PC Connector and Sync plugins (and the mobile app). They are now merrily obscuring shortcomings and up-selling success in a myriad of multi-hued yet meaningless graphs and gauges, all syncing nicely up and down to the cloud and leaving a neat trail of versions behind them for those cursed with the mind of an auditor.

I had to have a bit of a think about the whole Connections "thing", and it occurred to me that unless you are some pale spotty youth, you will have at least some level of professional expertise, even the keep-in-the-dark-well-away-from-customers people like me. We've all got a certain unique set of skills, knowledge and experience that make us an asset to our organization. I have been lobbying to get Bog Snorkeling, dressing up as Spiderman and Dandering long distances recognised as assets with, I have to say, limited success, but I am ever hopeful.

So there I was sitting at my desk between cups of coffee when a beam of sunlight came through the window and suddenly all was clear, I was having a damascene moment and all before 11am!

It occurred to me that there were questions... What are we really doing with these assets? Are we like the squirrels of Westeros, hoarding away our nuts because "winter is coming"? Are we saving all that goodness for ourselves? Are we using our expertise to further our own careers without ever considering how it might help others? I know it sounds a little odd, but expertise is a powerful gift that deserves to be shared. It's yours, and yes, you earned it. But why keep all that wisdom to yourself? Why not send it out into the world to be free and lift others to new heights as well?

Then someone mentioned it was time for a bacon butty and a 5-shot extra-sweet espresso, I lost my train of thought, and when I returned to my desk I was left with the difficult task of persuading the user base that "sharing is good... let's share".

But more of that in part 3


Monday 7 December 2015

A year in the cloud with IBM (Part 1)

So ... it is very nearly a year since my last post and what a year it has been!

I have been a busy boy!

I and my team of admins have moved the entire European and Asian workforce from our on-prem Domino servers to the IBM Smartcloud servers. We elected to have a hybrid environment, keeping our many and varied apps on a couple of on-prem servers and shifting mail totally to the IBM cloud servers (what used to be called IBM Smartcloud). We also elected, thanks to the IBM UK account team, to make the majority of our users "Full" users, provisioned with the complete menu of interesting stuff that the cloud offers: Mail (Verse), Meetings, Chat, Connections, Traveler and Archive Essentials.

Now, I could say that the migration and provisioning of our users was a smooth and fault-free experience, but I can't. We ran up against provisioning problems that reduced the migration to a crawl. These have since been addressed, and from early March this year we have had no problems at all.

The on-boarding tools, when we started, did not really suit what we needed to do. While I would have preferred to start the users with empty mail files in the cloud and leave the old mail files in place as archives, where they could access and manage their old mail as normal, this was considered by the user community to be a "non-runner". Neither did we want to migrate nearly a petabyte of old mail to the servers, so we reached a compromise and moved 8-12 weeks of mail and calendar data from the live mail file to the cloud, leaving the old mail file as a local replica on the workspace as an archive. (Apart, that is, from the accountants; what is it about accountants that they need every mail they ever received since 1995? *sigh*) So, in the absence of a free tool (I was on a very tight budget) that would do what I needed, I wrote a set of agents that would move:
  • Folders
  • Rules
  • Profiles
  • Mail
  • Calendar 
  • Todo
by date to the cloud.

This worked really quite well apart from a few gotchas, the main one being Google meeting invites; not all of them, just the ones that have "Never" as the "repeat ends" attribute. This, I discovered, creates a 10-year repeating Notes calendar entry if the user accepts it, so that daily conference call had thousands of dates in the calendar doc. That needed some serious tweaking!

We had a milestone date of April 1st to get the Asian and European workforce migrated, and with the help of our long-suffering on-boarding team and the local support folks in IBM Dublin (who can now swear almost as well as me) we managed to get the last planned user migrated on the 4th of April, which, all in all, was excellent. The problems we did have were 99% invisible to the users; all they saw was my team coming around warning them they would be moved sometime in the next 24 hours, and they were.

Having moved the users' mail to the cloud, we started consolidating data onto what will become our on-prem app servers. Most of these had been doubling as mail servers, and suddenly, with no mail running, they started to perform much better.

The old Quickr server was a bit of a problem. The Quickr environment was very stable and just sat in the corner and ran year after year, every now and then needing more disk space and a fixpack. Once again we had a "what to do with the data?" question. Quite a few of the places were there purely for historical purposes, so they were put on the "whenever" low-priority list. We focused on the places currently in fairly constant use and created a Connections community for each place.

Quickr Files were dead easy. 
1. Set up a Quickr Connector on my PC to the place and copied the files to a local directory
2. Piped a DIR to a text file
3. Set up a wee PHP server on my PC using XAMPP
4. Using PHP, read the file from step 2 and checked for duplicate file names (the cloud doesn't like duplicates)
5. Used the POST /files/{auth}/cmis/repository/{repositoryId}/folderc/snx:files API to upload the file
6. Then used the API to TAG the file with the old folder structure name

Job Done. 
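For the curious, here is a stripped-down PHP/cURL sketch of steps 4 to 6. The CMIS upload URL is the one mentioned above with "basic" assumed for the {auth} segment; the host name, credentials and the tagging step are placeholders of mine, so treat it as a sketch rather than the actual migration script.

```php
<?php
// Rough sketch of steps 4-6: read the DIR listing, skip duplicate file
// names, POST each file to the CMIS endpoint and note the old folder
// name so it can be applied as a TAG. The host, the "basic" auth
// segment and the credentials are illustrative assumptions - check the
// Connections Cloud Files API documentation before relying on them.

$base  = 'https://files.example.collabserv.com';   // assumed cloud host
$repo  = 'p!12345678';                             // assumed repositoryId
$lines = file('quickr_dir_listing.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);

$seen = array();
foreach ($lines as $path) {
    $name = strtolower(basename($path));
    if (isset($seen[$name])) {                     // the cloud doesn't like duplicates
        echo "Skipping duplicate: $path\n";
        continue;
    }
    $seen[$name] = true;

    // Step 5: upload the file to the files collection of the repository.
    $url = "$base/files/basic/cmis/repository/$repo/folderc/snx:files";
    $ch  = curl_init($url);
    curl_setopt_array($ch, array(
        CURLOPT_USERPWD        => 'migration.user@example.com:secret',
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => array('file' => new CURLFile($path)),
        CURLOPT_RETURNTRANSFER => true,
    ));
    $result = curl_exec($ch);
    echo curl_getinfo($ch, CURLINFO_HTTP_CODE) . " $path\n";
    curl_close($ch);

    // Step 6: the old Quickr folder name becomes the TAG. The actual
    // tagging call (a second request against the new file's metadata)
    // is omitted here for brevity.
    $tag = strtolower(str_replace(' ', '_', basename(dirname($path))));
}
```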
Quickr Docs were more problematic and required a placebot to dump the docs to text files, which were then uploaded using the Wiki POST API.

Once a community had been populated I added the Quickr place managers to the community and showed them how to work it. Once they had cried about the lack of folder nesting and seen how fast TAGs can be searched, they sucked up their tears, got on with it and have been using their communities in anger for some months.


One thing became clear very quickly: the lack of a mail-in function was a bit of a bollox to the Quickr place managers. I have something in test that uses an on-prem mail-in DB with an agent that detaches any attachments, takes the MIME text, and posts to a given community as FILES plus a BLOG entry with a link to the FILE (if any). The BLOG post is posted as a cloud user called "AVX Auto-Post", the original sender's internet address becomes a mention prepended to the body, and the subject (minus the FWD and RE prefixes) becomes the subject of the blog post.

We will be using the same process to post updates from the "internet of things-that-go-beep" on the shop floor to communities of engineers and manufacturing managers, so they can be notified promptly about issues and discuss them in the cosy shared confines of a community rather than in 101 emails. We have done a POC and got a rather nice "Wooo! that's good", which always does the soul good.


The other thing that has been tiring but fun is introducing my users to Connections, but that is enough for now. I'll tell you all about that in the next post.

Tuesday 3 February 2015

Two Factor Authentication And Smartcloud (Part 3)

Right, moving on...

The 2FA process: first we need to pair the app on the device with the User ID. So let's look at the process that does this.

The aim here is to make the mobile device as anonymous as possible, and by that I mean there is nothing on it that will expose the first factor credentials.


  1. When the app installs it is preconfigured with the server's address
  2. The app requests a new DEVID from the server
  3. The server creates a unique ID and stores it in a session variable
  4. The server then returns the DEVID to the device, which stores it in its own config
  5. On receipt, the app prompts the user to go to a URL on their PC and get a passcode
  6. The user goes to the URL on a separate device, usually a PC, and logs on using their User ID and Password
  7. The server generates a 9-digit passcode and saves it in the User Record table
  8. The user enters the 9-digit passcode in the prompt on the phone
  9. The app sends the 9-digit code to the server
  10. The server looks for the 9-digit code in the User table
  11. The server then requests more information from the device
  12. The device responds with the DEVID, phone number and IMEI number
  13. The server then stores this information against the user in the back-end DB
  14. All that is stored on the app is the DEVID
When this is complete the user has a device "paired" with the server, and although the phone knows it has a DEVID, it knows nothing about the user at all.
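To make the pairing steps concrete, here is a heavily stripped-down PHP sketch of the server side (roughly steps 2-4, 7 and 9-13). The table and column names are mine for illustration, there is no error handling, and a production version would use a proper CSPRNG for the passcode.

```php
<?php
// Stripped-down sketch of the server side of the pairing flow above.
// Assumes a 'users' table with devid, phone_number, imei and pair_code
// columns - the names are illustrative, not the production schema.
session_start();
$db = new mysqli('localhost', 'idp', 'secret', 'idp2fa');

// Steps 2-4: the app asks for a DEVID; mint an opaque ID, remember it
// in the session and hand it back to the device.
function new_devid() {
    $devid = bin2hex(openssl_random_pseudo_bytes(16));   // nothing user-related in it
    $_SESSION['pending_devid'] = $devid;
    return $devid;
}

// Step 7: the user has logged on at the pairing URL on their PC;
// generate a 9-digit passcode and save it in their user record.
function issue_pair_code(mysqli $db, $email) {
    $code = str_pad((string)mt_rand(0, 999999999), 9, '0', STR_PAD_LEFT);
    $stmt = $db->prepare('UPDATE users SET pair_code = ? WHERE email = ?');
    $stmt->bind_param('ss', $code, $email);
    $stmt->execute();
    return $code;                                         // displayed to the user
}

// Steps 9-13 (collapsed into one call here): the app sends the passcode
// plus its DEVID, phone number and IMEI; if the passcode matches a user
// record the device is bound to that user and the passcode is cleared.
function complete_pairing(mysqli $db, $code, $devid, $phone, $imei) {
    $stmt = $db->prepare(
        'UPDATE users SET devid = ?, phone_number = ?, imei = ?, pair_code = NULL
          WHERE pair_code = ?');
    $stmt->bind_param('ssss', $devid, $phone, $imei, $code);
    $stmt->execute();
    return $stmt->affected_rows === 1;                    // true = device paired
}
```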

When the user's phone is online the app will register its presence by sending a request to the server saying "I am here and I am online".

So when a user signs in and the server decides that 2FA is required (see the last post for the logic used to decide this), the following happens:

  1. The server looks up the DEVID associated with the user (who has passed the first factor validation). If there is no DEVID the Sign-In attempt fails
  2. The server creates a Transaction ID and stores it with the DEVID in a DB table with a status of WAITING
  3. The server sends the Transaction ID back to the browser and the browser starts a timer-based AJAX call to poll the server, using the Transaction ID, to see if the status changes
  4. The server pushes a message to the DEVID and the app generates a prompt for the user, where they must tap OK or CANCEL to continue
  5. The app returns the user's response to the server and the response is stored in the Transaction DB as OK or CANCEL. If the request times out with no response then the status in the DB is set to FAILED
  6. The user's browser, which has been polling the server looking at the Transaction table, notes the status change. If it changes to OK then the SAML token is constructed and sent to the Smartcloud server. Any other change results in an error being displayed in the user's browser

You will note that no information about the phone is sent to or stored in the browser, and no information about the browser or user is sent to the phone. The connection is conducted entirely through the server.
From a user perspective, they enter their User ID and Password and click the SIGN IN button; if 2FA is required, a window appears telling them to get their 2FA device. They open the device, open the app, tap OK and they are signed in.
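A rough PHP sketch of the three server-side pieces involved: the transaction creation, the device callback and the endpoint the browser polls. The table and column names are my own illustrations, the push notification itself is left out, and the two-minute timeout is an arbitrary value.

```php
<?php
// Rough sketch of the 2FA transaction handling described above. Assumes
// a 'tfa_transactions' table with txn_id, devid, status and created_at
// columns - illustrative names only, not the production schema.
$db = new mysqli('localhost', 'idp', 'secret', 'idp2fa');

// Step 2: create a WAITING transaction for the user's paired device.
function start_transaction(mysqli $db, $devid) {
    $txn  = bin2hex(openssl_random_pseudo_bytes(16));
    $stmt = $db->prepare(
        "INSERT INTO tfa_transactions (txn_id, devid, status, created_at)
         VALUES (?, ?, 'WAITING', NOW())");
    $stmt->bind_param('ss', $txn, $devid);
    $stmt->execute();
    // Step 4: push a notification to $devid here (push mechanism omitted).
    return $txn;                       // step 3: handed back to the browser
}

// Step 5: the app posts back the user's OK or CANCEL for a transaction.
function record_response(mysqli $db, $txn, $response) {
    $status = ($response === 'OK') ? 'OK' : 'CANCEL';
    $stmt = $db->prepare(
        "UPDATE tfa_transactions SET status = ? WHERE txn_id = ? AND status = 'WAITING'");
    $stmt->bind_param('ss', $status, $txn);
    $stmt->execute();
}

// Step 6: the browser's timer-based AJAX call polls this until the status
// changes; WAITING transactions that hang around too long are FAILED here.
function poll_transaction(mysqli $db, $txn) {
    $db->query("UPDATE tfa_transactions SET status = 'FAILED'
                 WHERE status = 'WAITING' AND created_at < NOW() - INTERVAL 2 MINUTE");
    $stmt = $db->prepare('SELECT status FROM tfa_transactions WHERE txn_id = ?');
    $stmt->bind_param('s', $txn);
    $stmt->execute();
    $stmt->bind_result($status);
    return $stmt->fetch() ? $status : 'FAILED';   // OK -> build and POST the SAML token
}
```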

YIPPEEEEE! I hear you say, you have 2FA up and running.

I know for most of us Geeks we are never far from our mobile devices; we keep them close and do the "WKP" check at least every 5 minutes (WKP == Wallet, Keys, Phone). Users don't. They forget their phones, they drop them into toilets, sinks, swimming pools and jacuzzis (with or without buxom ladies), bend them, break them, put them in the microwave (honestly, this happened, to, and I quote, "dry it out after I dropped it in a pint of beer"), get them stolen ("She seemed like such a nice lady in the bar")... and you can rest assured that this calamity will occur just when they are expecting an email that they really, really, really need to read and reply to or "ALL HELL WILL BREAK LOOSE!". I am sure you know what I mean.


Given that we all know what eejits users are, we need to give them an alternative method of signing in on those occasions when, for whatever reason, they find themselves without their paired devices. These alternatives I will expand on in the next post.


Two Factor Authentication And Smartcloud (Part 2)

Following on from the last post, we had an idea for a solution to the problem of attaching two factor authentication (2FA) to Smartcloud; now what we needed was a more detailed "story" that would define the Sign-In process we would use.

The first factor

The first factor is "something you know", which for us, as for nearly every application, is the combination of User ID and Password. Smartcloud requires the remote IdP to pass the validated User ID (but not the password) in the SAML token, and this User ID must be the user's email address as it is provisioned in the Smartcloud service.

The password needs to be strong, at the very least following the 8x4 rule: at least 8 characters long, with the characters being a mixture of 4 types:

  1. Lower case letters
  2. Upper case letters
  3. Numbers
  4. Special Characters
Any system would have to enforce this minimum policy.
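A quick PHP sketch of what enforcing the 8x4 rule looks like; where it hooks in (the directory, the password-change form, and so on) is up to whatever system owns the passwords.

```php
<?php
// Minimal 8x4 check: at least 8 characters and all four character
// classes present.
function meets_8x4($password) {
    return strlen($password) >= 8
        && preg_match('/[a-z]/', $password)          // lower case letters
        && preg_match('/[A-Z]/', $password)          // upper case letters
        && preg_match('/[0-9]/', $password)          // numbers
        && preg_match('/[^a-zA-Z0-9]/', $password);  // special characters
}

var_dump(meets_8x4('S3cure!pw'));   // bool(true)
var_dump(meets_8x4('password1'));   // bool(false) - no upper case or special character
```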

Having a complex password alone does not protect the user's account. Phishing, keyloggers, man-in-the-middle attacks, having someone simply ask "What's your password?", not to mention the unfortunate habit of saving your Sign-In details in your browser, mean there is more than a small chance that an account's first factor will be compromised at some time.

The second factor

The second factor is "something you have", and it mitigates the risk of the first factor being compromised. There are several types of second factor: dongles that contain PKI signatures, biometric scans, and apps that run on a separate device, usually a mobile phone or tablet.

USB dongles are possibly the most secure, but only if you have well-trained users who do not lose them, do not leave them in the PC and do not figure out a way to save an episode of The Big Bang Theory to them. There is also a cost involved in providing everyone in the organisation with a dongle and the PKI certificates.

Biometrics are now becoming popular, with fingerprint scans and eye scans. However, this is even more expensive than USB dongles, as not all hardware comes with a biometric reader and older PCs may not support the peripheral devices.

Mobile apps are the easiest way to get a second factor. The app gets a "pushed" request from the IdP and presents the user with a message they must acknowledge; the mobile app acts as the thing you have. While possible, it is unlikely that both the phone and the PC will be stolen together, and if one is stolen it is useless without the other.

Needless to say, the app must NOT contain either the User ID or the Password, in case it is stolen.

The Sign-In Process

We thought about this long and hard, and the process goes something like this:

  1. The user Signs-In
  2. The User ID and Password are validated and the process exits if they are invalid
  3. Is the User ID "active"? If not, exit the process
    This allows the admin to flag a user as ACTIVE or DELETED, thus stopping access selectively
  4. Get the IP address from the posted header; is it blacklisted? If yes, exit
    This allows us to blacklist known "bad" IP locations
  5. Get the IP address again; is it whitelisted? If yes, send the SAML token with no 2FA
    This allows us to whitelist internal networks as "safe" and therefore not requiring 2FA
  6. What sort of 2FA PROFILE does the user have?
    This is another special user attribute, which can be:
    ALWAYS - The user is ALWAYS 2FAed and 2FA begins now
    NEVER - The user is NEVER 2FAed and the SAML TOKEN is sent now
    NORMAL - The process continues
    This gives the admin the flexibility to force (or not) 2FA on a user
  7. What browser/PC is the user trying to access from?
    I will be covering this in some depth in a later post under "Fingerprinting"
    I can say it does NOT involve cookies!
  8. If the user has a NORMAL 2FA profile, the last time they were 2FAed is tested; if it was more than 7 days ago 2FA is requested, if less then the SAML TOKEN is sent

And that is basically what we coded for.
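Pulled together, the decision tree reads roughly like the PHP sketch below. The field names and return values are mine for illustration; the blacklist, whitelist and fingerprint lookups happen elsewhere.

```php
<?php
// The decision tree above boiled down to one function. $user is the user
// record already fetched from the data store (field names are illustrative),
// and the booleans are the results of the blacklist/whitelist lookups.
function signin_decision($user, $passwordOk, $ipBlacklisted, $ipWhitelisted) {
    if ($user === null || !$passwordOk)    return 'FAIL';       // 1-2: bad credentials
    if (!$user['active'])                  return 'FAIL';       // 3: flagged DELETED
    if ($ipBlacklisted)                    return 'FAIL';       // 4: known bad IP
    if ($ipWhitelisted)                    return 'SEND_SAML';  // 5: safe internal network
    if ($user['tfa_profile'] === 'NEVER')  return 'SEND_SAML';  // 6: never 2FAed
    if ($user['tfa_profile'] === 'ALWAYS') return 'START_2FA';  // 6: always 2FAed
    // 7: browser/PC fingerprinting would be consulted here (later post).
    // 8: NORMAL profile - 2FA only if the last one was more than 7 days ago.
    $recent = !empty($user['last_tfa'])
           && strtotime($user['last_tfa']) > strtotime('-7 days');
    return $recent ? 'SEND_SAML' : 'START_2FA';
}

// Example: a NORMAL user who last 2FAed yesterday, from a non-whitelisted IP.
echo signin_decision(
    array('active' => 1, 'tfa_profile' => 'NORMAL',
          'last_tfa' => date('Y-m-d H:i:s', strtotime('-1 day'))),
    true, false, false
);  // SEND_SAML
```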
In the next post I will look at the 2FA process in some detail. I bet you can't wait.

 


Monday 2 February 2015

Two Factor Authentication And Smartcloud (Part 1)

The next set of posts, the first in over a year, will explore my latest project: attaching two factor authentication to IBM Smartcloud. This is the topic I bored the pants off people with at ConnectED this year, mainly because I am rather proud of doing it and it has a certain XML-parsed coolness. So, without further ado, this is the first post of a multi-part series that tells the story of how I added two factor authentication to Smartcloud for less than 20 Altairian dollars a day.

WHY?

A. SmartCloud does not have it.
B. Google / Office 365 etc do have it.
C. Smartcloud is considered less secure because it does not have it and the others do.

Now, whether or not C is actually the case is a moot point. When you line up a comparison table of functions available from the enterprise cloud providers, CIOs and CTOs notice that Two Factor Authentication (2FA) is missing in the Smartcloud column, and they consider that to be a failing; a failing sufficiently notable to discount Smartcloud from consideration as a cloud-based solution.


Such was the thinking in my case. Whilst a hybrid-model Smartcloud deployment ticked all the boxes for user functionality (Notes mail, Calendar, To Do, Contacts, Connections, Files, Sametime, Meetings, Traveler, Connections Mobile and support for the myriad of our own applications), all of this was for nothing if Smartcloud was considered less secure because of the absence of 2FA.

The addendum to the 2FA requirement was that, as Google et al. had 2FA built in as part of the subscription price, any solution we provided needed to come without a noticeable increase in cost per user per month.


HOW?

Well, that stinger of minimal cost was the Prime Directive as far as our solution was concerned. There are plenty of Identity Providers (IdPs) out there that will supply you with 2FA facilities; however, these will cost you money, $2-$10 per user per month. So, by definition, these solutions, however laudable, were outside the bounds of consideration.

We had to do this ourselves and we had to do it quickly.


Smartcloud allows for Federated Logon, where the sign-in process is passed to a third-party IdP; once the IdP has done all that it needs to do to verify the user's identity, it passes a SAML token back to Smartcloud (aka the Service Provider, or SP), which allows the user to log on.

The Smartcloud servers do not care what the IdP does, other than that it has to pass a properly formatted SAML token back to the cloud. What we needed was something we could host on-prem that would validate the user and, when required, process the 2FA.


Smartcloud has several flavours of federation:

  • Normal - All users use Smartcloud for Sign-In
  • Federated - All users use a third party IdP to Sign-In
  • Hybrid - The user can choose to log on from either the third party IdP or Smartcloud
  • Partial - The Admins choose the server the user will use to Sign-In
The best fit for our purposes was Partial, as this left the choice of security to the Admin teams; as such we could enforce the security policies in a way that guaranteed they were being followed, while still leaving the option to switch a user back to IBM-only security validation should the need arise (e.g. a catastrophic failure of the on-prem IdP).


So with that taken care of we now had to select an IdP that would allow us to:
  1. Validate the user with the first factor (Userid and Password) 
  2. Allow us to control the 2FA process using a second factor
  3. Send IBM a properly formed SAML token

Validating the User with the first factor

There are four things to consider here.
  1. The data source in which we will store the users' data attributes
  2. The code that does the Initial Validation
  3. The code that does the 2FA
  4. The code that creates the SAML Token
The data store can be anything: DB2, MSSQL, MySQL, LDAP. However, as we shall see in a later post, there are user attributes and separate session attributes, the complexity of which made me discount LDAP as a data source.
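To give a feel for what that data store held, here is an illustrative sketch of the two main tables; the column names are my own reconstruction from the attributes mentioned across this series (DEVID, phone number, IMEI, 2FA profile, last-2FA date, transaction status), not the actual schema.

```php
<?php
// Illustrative guess at the two main MySQL tables behind the IdP: one
// row per user and one row per 2FA transaction. Column names are mine,
// based on the attributes described across this series.
$db = new mysqli('localhost', 'idp', 'secret', 'idp2fa');

$db->query("
    CREATE TABLE IF NOT EXISTS users (
        email        VARCHAR(254) PRIMARY KEY,   -- Smartcloud wants the email address as the user id
        pwd_hash     VARCHAR(255) NOT NULL,
        active       TINYINT(1)   NOT NULL DEFAULT 1,  -- ACTIVE / DELETED flag
        tfa_profile  ENUM('ALWAYS','NEVER','NORMAL') NOT NULL DEFAULT 'NORMAL',
        devid        CHAR(32)     NULL,           -- paired device, NULL until pairing completes
        phone_number VARCHAR(32)  NULL,
        imei         VARCHAR(20)  NULL,
        pair_code    CHAR(9)      NULL,           -- 9-digit pairing passcode
        last_tfa     DATETIME     NULL            -- last successful 2FA
    )");

$db->query("
    CREATE TABLE IF NOT EXISTS tfa_transactions (
        txn_id     CHAR(32) PRIMARY KEY,
        devid      CHAR(32) NOT NULL,
        status     ENUM('WAITING','OK','CANCEL','FAILED') NOT NULL DEFAULT 'WAITING',
        created_at DATETIME NOT NULL
    )");
```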
 

The code was a thorny problem. While some platforms allow for user validation and SAML token production, they do not provide easy hooks that allow you to interrupt the Sign-In process and insert the 2FA process, and rightly so, as this would be a security hole. Given the complexity, this avenue was discounted, although we may want to explore it further.

My core competencies are in RPG/DB2, PHP/DB2 and PHP/MySQL, all of which allow for complex coding and data stores. The deciding factor was the production of the SAML token. There is an excellent open source SAML framework called SimpleSAMLphp. This framework allows you to create an IdP that will do the first factor (username and password) validation, lets you add in your own second factor authentication code, and produces a correctly formed SAML token which is posted to Smartcloud, all using PHP.
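The posts do not spell out exactly how the 2FA step was bolted into SimpleSAMLphp, but in the 1.x versions of that era the natural hook is an authentication processing filter on the hosted IdP metadata; the sketch below shows that shape, with every name in it being an assumption of mine rather than the author's configuration.

```php
<?php
// metadata/saml20-idp-hosted.php - one way (of several) to wire this up
// in a 1.x-era SimpleSAMLphp install: an SQL auth source handles the
// first factor and a custom authproc filter interrupts the flow to run
// the 2FA step before the SAML response is built. The auth source name,
// key/cert file names and module name are all illustrative.
$metadata['__DYNAMIC:1__'] = array(
    'host'        => '__DEFAULT__',
    'privatekey'  => 'idp.pem',
    'certificate' => 'idp.crt',

    // First factor: an auth source defined in config/authsources.php,
    // e.g. the bundled sqlauth:SQL module pointed at the MySQL user table.
    'auth'        => 'company-sql',

    // Second factor: a custom authentication processing filter that
    // pushes the prompt to the paired device and waits for the answer.
    'authproc'    => array(
        50 => array('class' => 'companytfa:SecondFactor'),
    ),
);
```

On the SimpleSAMLphp side, Smartcloud then simply appears as a remote SP in metadata/saml20-sp-remote.php.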

Platform

The platform choice was an internal one. We were already using our System i boxes for other web purposes, and running the IdP on an SSLed port other than 443, while not impossible, was going to be awkward because of the format of the URLs the SAML exchange requires. So the platform of choice for us was a LAMP server, again because, after System i, this is where our core competencies lie.


Conclusion

So we now had a starting point: a LAMP server with SimpleSAMLphp installed, storing all the data we need in MySQL tables. Next we moved on to defining the Sign-In process in detail, and that we will explore in the next post.


 
