Tuesday 8 December 2015

A year in the cloud with IBM (part 2)

Yesterday I left you wondering, well... what next?

Now to start with, we were not an on-prem Connections user before we moved to the cloud. The majority of the user licenses we obtained were for the all-singing, all-dancing, everything-with-bells-on package, with only 1% mail-only (and a little bit of FILEage).

So where did people start with all the added extras?

Meetings, Files and Docs were the first out of the blocks.

Meetings got off to a good start, then stumbled, not because of anything intrinsically wrong, but because Google decided that Chrome would no longer support Java, and that put the kibosh on the plugin, leaving a good 25% of the user base with their default browser set to something that gives the cheerful but rather annoying message "Your Browser Is Not Currently Supported". While changing to another browser is not the end of the world, it feels like it when you, as a user, have invested time and energy getting the browser you are currently on working just the way you want it. However, prior to this we had a fixed number of GoToMeeting licenses, which came with their own set of problems and a fairly heavy price tag, whereas Meetings was built into the licence cost and did not require a promise, in triplicate, in blood, that the meeting was important to the gods of facilities. A quick trip to Maplin and a few USB headsets later (for lending to people) and Meetings is being used daily. We are looking at the Audio-bridge add-on to start in the new year to give dial-in facilities and I have to say I am quite impressed with the costs that I am being quoted.

Files and Docs meant that all of a sudden those large CAD designs and 1,000,000-page image-heavy PowerPoints could be shuffled around the plants and offices without the need for IT to host them on a web or FTP server somewhere and arrange logons.
.... but ....
"Where were the folders?" was the first plaintive yell , followed soon after by "We want our folders!"
To begin with I thought this was a fair enough comment to make; years and years of sticking things in deeply nested folder trees that made sense at the time they were created was the happy place most users lived in. It took some time for both me and them to realize that the fastest way to find things was using TAGS and DESCRIPTIONS. Once you build up the metadata that surrounds the file in a way that gives it context, folders are not that important: nice to have, but not vital. Indeed, once they got their feet under them, they discovered that TAGS, DESCRIPTIONS and the collaborative comments all join together to give the object a lot more context than a folder tree and a stupidly long file name. [Go on, admit it, you too have created a file with a name like Estimated_ROI_On_Project_SMCD90a2105_Draft_For_Review_November.xlsx] A bit of forethought meant that life without folders was not so bad. Nested folders are coming soon, but in the interim people have learned, and are learning, that they are not the only paradigm for successful file management.

Docs were slower to take off, yet they are now becoming the de facto method for creating the normal day-to-day documents that are the bread and butter of a manufacturing company. They no longer exist in 30 or 40 inboxes, on 30 or 40 hard drives or in a myriad of USB drives, all in varying states of being out of date. Now they are in FILES or Communities being worked on, their versions being tracked and their comments full of what used to happen in emails.

Where Docs is not useful is for the power user, the Dashboard King and the Pivot Table Prince; however, they have settled in nicely with the PC Connector and Sync plugins (and the mobile app). They are now merrily obscuring shortcomings and up-selling success in a myriad of multi-hued yet meaningless graphs and gauges, all syncing nicely up and down the cloud, leaving a neat trail of versions behind them for those cursed with the mind of an auditor.

I had to have a bit of a think about the whole Connections "thing" and it occurred to me that
unless you are some pale spotty youth, you will have at least some level of professional expertise, even the keep-in-the-dark-well-away-from-customers people like me. We've all got a certain unique set of skills, knowledge and experience that make us an asset to our organization. I have to say I have been lobbying to get Bog Snorkeling, dressing up as Spiderman and Dandering long distances recognized as assets with, it must be said, limited success, but I am ever hopeful.

So there I was sitting at my desk between cups of coffee when a beam of sunlight came through the window and suddenly all was clear; I was having a Damascene moment, and all before 11am!

It occurred to me that there were questions... What are we really doing with these assets? Are we like the Squirrels of Westeros, hoarding away our nuts because "winter is coming"? Are we saving all that goodness for ourselves? Are we using our expertise to further our own careers without ever considering how it might help others? I know it sounds a little odd, but expertise is a powerful gift that deserves to be shared. It's yours, and yes, you earned it. But why keep all that wisdom to yourself? Why not send it out into the world to be free and lift others to new heights as well?

Then someone mentioned it was time for a bacon butty and a 5-shot extra-sweet espresso and I lost my train of thought, and when I returned to my desk I was left with the difficult task of how do I persuade the user base that "Sharing is good... let's share".

But more of that in part 3


Monday 7 December 2015

A year in the cloud with IBM (Part 1)

So ... it is very nearly a year since my last post and what a year it has been!

I have been a busy boy!

I and my team of admins have moved the entire European and Asian workforce from our on-prem Domino servers to the IBM SmartCloud servers. We elected to have a hybrid environment, keeping our many and varied apps on a couple of on-prem servers and shifting mail totally to the IBM cloud servers (what used to be called IBM SmartCloud). We also elected, thanks to IBM UK's account team, to make the majority of our users "Full" users, provisioned with the complete menu of interesting stuff that the cloud offers: Mail (Verse), Meetings, Chat, Connections, Traveler and Archive Essentials.

Now I could say that the migration and provisioning of our users was a smooth and fault-free experience, but I can't. We ran up against provisioning problems that reduced the migration to a crawl. These problems have since been addressed and from early March this year we have had no problems at all.

The on-boarding tools when we started did not really suit what we needed to do. While I would have preferred to leave the old mail files in place as archives, where users could access and manage their old mail as normal, starting the users with empty mail files in the cloud was considered by the user community to be a "non-runner". Neither did we want to migrate nearly a petabyte of old mail to the servers, so we reached a compromise and moved 8-12 weeks of mail and calendar data from the live mail file to the cloud, leaving the old mail file as a local replica on the workspace as an archive. (Apart, that is, from accountants. What is it about accountants that they need every mail they ever received since 1995? *sigh*) So, in the absence of a free tool (I was on a very tight budget) that would do what I needed, I wrote a set of agents that would move the following, by date, to the cloud:
  • Folders
  • Rules
  • Profiles
  • Mail
  • Calendar
  • Todo

This worked really quite well apart from a few gotchas, the main one being Google meeting invites; not all of them, just the ones that have "Never" as a "repeat ends" attribute. This, I discovered, creates a 10-year repeating Notes calendar entry if the user accepts it, so that daily conference call had 1000's of dates in the calendar doc. That needed some serious tweaking!

We had a milestone date of April 1st to get the Asian and European workforce migrated, and with the help of our long-suffering on-boarding team and the local support folks in IBM Dublin (who can now swear almost as well as me) we managed to get the last planned user migrated on the 4th of April, which all in all was excellent. The problems we did have were 99% invisible to the users; all they saw was my team coming around warning them they would be moved sometime in the next 24 hours, and they were.

Having moved the users' mail to the cloud, we started consolidating data onto what will become our on-prem app servers. Most of these had been doubling as mail servers, and suddenly, with no mail running, they started to perform much better.

The old QUICKR server was a bit of a problem. The Quickr environment was very stable and just sat in the corner and ran year after year, every now and then needing more disk space and a fixpack. Once again we had a "what to do with the data?" question. Quite a few of the places were there purely for historical purposes, so they were put on the "whenever" low-priority list. We focused on the places currently in fairly constant use and created a Connections community for each place.

Quickr Files were dead easy:
1. Set up a Quickr Connector on my PC to the place and copied the files to a local directory
2. Piped a DIR to a text file
3. Set up a wee PHP server on my PC using XAMPP
4. Using PHP, read the file from step 2 and checked for duplicate file names (the cloud don't like duplicates)
5. Used the POST /files/{auth}/cmis/repository/{repositoryId}/folderc/snx:files API to upload the file
6. Then used the API to TAG the file with the old folder structure name (steps 4-6 are sketched below)

Job Done. 
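For the curious, here is a rough PHP sketch of steps 4-6. The host name, credentials and the tagFile() helper are illustrative assumptions, and {auth} is filled in as "basic" as an assumption too; only the endpoint path is the one from step 5, so treat this as a starting point rather than gospel.

<?php
// Rough sketch of steps 4-6. Host, credentials and tagFile() are assumptions.
$host         = 'https://apps.na.collabserv.com';  // assumed cloud host
$repositoryId = 'YOUR-REPOSITORY-ID';
$seen  = array();
$lines = file('quickr_dir_listing.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);

foreach ($lines as $line) {
    $localName  = basename(trim($line));
    $uploadName = $localName;
    if (isset($seen[strtolower($localName)])) {      // the cloud don't like duplicates...
        $uploadName = uniqid() . '_' . $localName;   // ...so de-dupe by prefixing
    }
    $seen[strtolower($localName)] = true;

    // Step 5: POST the file to the CMIS files endpoint
    $ch = curl_init("$host/files/basic/cmis/repository/$repositoryId/folderc/snx:files");
    curl_setopt_array($ch, array(
        CURLOPT_POST           => true,
        CURLOPT_USERPWD        => 'clouduser@example.com:password',
        CURLOPT_HTTPHEADER     => array('Slug: ' . $uploadName),  // desired file name
        CURLOPT_POSTFIELDS     => file_get_contents("quickr_dump/$localName"),
        CURLOPT_RETURNTRANSFER => true,
    ));
    curl_exec($ch);
    curl_close($ch);

    // Step 6: tag the file with the old folder name - tagFile() is a
    // hypothetical wrapper around the files tagging API
    tagFile($uploadName, dirname($line));
}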
Quickr Docs were more problematic and required a placebot to dump the docs to text files, which were then uploaded using the WIKI POST API.

Once a community had been populated I added the Quickr place managers to the community and showed them how to work it. Once they had cried about the lack of folder nesting and seen how fast TAGs can be searched, they sucked up their tears, got on with it, and have been using their communities in anger for some months.


One thing became clear very quickly: the lack of a Mail-In function was a bit of a bollox for the Quickr place managers. I have something in test that provides an on-prem mail-in DB with an agent that detaches any attachments, takes the MIME text, and posts it to a given community as a FILE and a BLOG entry with a link to the FILE (if any). The BLOG post is posted as a cloud user called "AVX Auto-Post"; the original sender's internet address becomes a mention pre-pended to the body, and the subject (minus the FWD and RE prefixes) becomes the subject of the blog.
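The detach-and-decode half lives in a Notes agent, but the HTTP half of the relay looks roughly like this PHP sketch. The blog endpoint path and the Atom payload shape are assumptions from memory, so check the Connections Blogs API docs for the real contract before borrowing it.

<?php
// Hypothetical sketch of the "post to a community blog" half of the mail-in relay.
function postToCommunityBlog($blogHandle, $senderAddress, $subject, $bodyHtml, $fileUrl)
{
    // Strip the RE:/FWD: prefixes and pre-pend the original sender as a mention
    $subject = preg_replace('/^((RE|FW|FWD):\s*)+/i', '', $subject);
    $body    = '@' . $senderAddress . '<br/>' . $bodyHtml;
    if ($fileUrl) {
        $body .= '<br/><a href="' . htmlspecialchars($fileUrl) . '">Attached file</a>';
    }

    $atom = '<entry xmlns="http://www.w3.org/2005/Atom">'
          . '<title type="text">' . htmlspecialchars($subject) . '</title>'
          . '<content type="html">' . htmlspecialchars($body) . '</content>'
          . '</entry>';

    $ch = curl_init("https://apps.na.collabserv.com/blogs/$blogHandle/api/entries");
    curl_setopt_array($ch, array(
        CURLOPT_POST           => true,
        CURLOPT_USERPWD        => 'avx.autopost@example.com:********',  // the "AVX Auto-Post" user
        CURLOPT_HTTPHEADER     => array('Content-Type: application/atom+xml'),
        CURLOPT_POSTFIELDS     => $atom,
        CURLOPT_RETURNTRANSFER => true,
    ));
    $response = curl_exec($ch);
    curl_close($ch);
    return $response;
}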

We will be using the same process to post updates from the "internet of things-that-go-beep" on the shop floor to communities of engineers and manufacturing managers, so they can be notified promptly about issues and discuss them in the cosy shared confines of a community rather than in 101 emails. We have done a POC and got a rather nice "Wooo! That's good", which always does the soul good.


The other thing that has been tiring but fun is introducing my users to Connections, but that is enough for now. I will tell you all about that in the next post.

Tuesday 3 February 2015

Two Factor Authentication And Smartcloud (Part 3)

Right, moving on...

The 2FA process: first we need to pair the app on the device with the User ID. So let's look at the process that does this.

The aim here is to make the mobile device as anonymous as possible and by that I mean there is nothing on it that will expose the first factor credentials.


  1. When the app installs it is preconfigured with the server's address
  2. The app requests a new DEVID from the server
  3. The server creates a unique ID and stores it in a session variable
  4. The server then returns the DEVID to the device, which stores it in its own config
  5. The app, on receipt, prompts the user to go to a URL on their PC and get a passcode
  6. The user goes to the URL on a separate device, usually a PC, and logs on using their UserID and Password
  7. The server generates a 9-digit passcode and saves it in the User Record table
  8. The user enters the 9-digit passcode in the prompt on the phone
  9. The app sends the 9-digit code to the server
  10. The server looks for the 9-digit code in the User table
  11. The server then sends a request for more information from the device
  12. The device responds with DEVID, phone number and IMEI number
  13. The server then stores this information against the user in the back-end DB
  14. All that remains on the app is the DEVID
When this is complete the user has a device "paired" with the server, and although the phone knows it has a DEVID, it knows nothing about the user at all.
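For the curious, a minimal sketch of the server side of the handshake (steps 7-13) might look like this in PHP; the table and column names are hypothetical stand-ins for our real schema.

<?php
// Step 7: after the user logs on at the pairing URL, issue a 9-digit passcode
function issuePasscode(PDO $db, $userId)
{
    $passcode = str_pad(mt_rand(0, 999999999), 9, '0', STR_PAD_LEFT);
    $db->prepare('UPDATE users SET passcode = ? WHERE user_id = ?')
       ->execute(array($passcode, $userId));
    return $passcode;  // displayed on the PC for the user to type into the app
}

// Steps 9-13: the app sends back the passcode plus its device details
function pairDevice(PDO $db, $passcode, $devId, $phoneNumber, $imei)
{
    $stmt = $db->prepare('SELECT user_id FROM users WHERE passcode = ?');
    $stmt->execute(array($passcode));
    $userId = $stmt->fetchColumn();
    if ($userId === false) {
        return false;  // no matching passcode - pairing fails
    }
    // Everything is stored against the user; the phone keeps only its DEVID
    $db->prepare('UPDATE users SET devid = ?, phone = ?, imei = ?, passcode = NULL WHERE user_id = ?')
       ->execute(array($devId, $phoneNumber, $imei, $userId));
    return true;
}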

When the user's phone is online the app will register its presence by sending a request to the server saying "I am here and I am online".

So when a user signs in and the server decides that 2FA is required (see the last post for the logic used to decide this), the following happens:

  1. The server looks up the DEVID associated with the user (who has passed the first factor validation). If there is no DEVID, the Sign-In attempt fails
  2. The server creates a Transaction ID and stores this with the DEVID in a DB table with a status of WAITING
  3. The server sends the Transaction ID back to the browser, and the browser starts a timer-based AJAX call to poll the server using the Transaction ID to see if the status changes
  4. The server pushes a message to the DEVID and the app generates a prompt for the user, where they must tap OK or CANCEL to continue
  5. The app returns the user's response to the server and the response is stored in the Transaction DB as OK or CANCEL. If the request times out with no response then the status on the DB is set to FAILED
  6. The user's browser, which has been polling the server looking at the Transaction table, notes the status change. If it changes to OK then the SAML token is constructed and sent to the Smartcloud server. Any other change results in an error being displayed in the user's browser

You will note that no information about the phone is sent to or stored in the browser, and no information about the browser or user is sent to the phone. The conversation is conducted entirely through the server.
From a user's perspective, they enter their UserID and Password and click the SIGN IN button; if 2FA is required a window will appear telling them to get their 2FA device. They open their device, open the app, tap OK and they are signed on.
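That polling endpoint is nothing exotic; a minimal PHP sketch, again with hypothetical table and column names, would be:

<?php
// Sketch of the endpoint the browser's timer-based AJAX call polls.
function pollTransaction(PDO $db, $transactionId)
{
    $stmt = $db->prepare('SELECT status FROM transactions WHERE txn_id = ?');
    $stmt->execute(array($transactionId));
    $status = $stmt->fetchColumn();   // WAITING, OK, CANCEL or FAILED

    header('Content-Type: application/json');
    echo json_encode(array('status' => ($status === false) ? 'FAILED' : $status));
    // On OK the server-side code goes on to build the SAML token;
    // anything else is shown to the user as an error.
}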

YIPPEEEEE! I hear you say, you have 2FA up and running.

I know for most of us geeks, we are never far from our mobile devices; we keep them close and do the "WKP" check at least every 5 minutes (WKP == Wallet, Keys, Phone). Users don't. They forget their phones, they drop them into toilets, sinks, swimming pools and jacuzzis (with or without buxom ladies), bend them, break them, put them in the microwave (honestly this happened, and I quote, "Dry it out after I dropped it in a pint of beer"), get them stolen ("She seemed like such a nice lady in the bar")... and you can rest assured that this calamity will occur just when they are expecting an email that they really, really, really need to read and reply to or "ALL HELL WILL BREAK LOOSE!". I am sure you know what I mean.


Given that we all know what eejits users are, we need to give them an alternative method of achieving Sign-In on those occasions when they, for whatever reason, find themselves without their paired devices. These alternatives I will expand on in the next post.


Two Factor Authentication And Smartcloud (Part 2)

Following on from the last post, we had an idea for a solution to the problem of attaching two factor authentication (2FA) to Smartcloud. Now what we needed was a more detailed "story" that would define the Sign-In process we would use.

The first factor

The first factor is "something you know", which for us, like nearly every application, is the combination of User ID and Password. Smartcloud requires the remote IdP to pass the validated User ID (but not the password) in the SAML token, and this User ID must be the user's email address as provisioned in the Smartcloud service.

The password needs to be strong, at least following the 8x4 rule:
8 characters long, and the characters should be a mixture of 4 types

  1. Lower case letters
  2. Upper case letters
  3. Numbers
  4. Special Characters
Any system would have to enforce this minimum policy.
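A minimal 8x4 check in PHP is a one-liner per rule; this sketch is illustrative rather than our production code.

<?php
// At least 8 characters with at least one character from each of the four classes.
function meetsEightByFour($password)
{
    return strlen($password) >= 8
        && preg_match('/[a-z]/', $password)          // lower case letters
        && preg_match('/[A-Z]/', $password)          // upper case letters
        && preg_match('/[0-9]/', $password)          // numbers
        && preg_match('/[^a-zA-Z0-9]/', $password);  // special characters
}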

Having a complex password alone does not protect the user's account. Phishing, keyloggers, man-in-the-middle attacks, having someone ask "What's your password?", not to mention the unfortunate habit of saving your Sign-In details in your browser, mean there is more than a small chance that an account's first factor will be compromised at some time.

The second factor

The second factor is "something you have", and it mitigates the risk of the first factor being compromised. There are several types of second factor: dongles that contain PKI signatures, biometric scans, and apps that run on a separate device, usually a mobile phone or tablet.

USB dongles are possibly the most secure, but only if you have well-trained users who do not lose them, do not leave them in the PC and do not figure out a way to save an episode of The Big Bang Theory to them. There is also a cost involved in providing everyone in the organisation with the dongle and the PKI certificates.

Biometrics are now becoming popular, with fingerprint and eye scans. However, this is even more expensive than the USB dongles, as not all hardware comes with a biometric reader and older PCs may not support the peripheral devices.

Mobile apps are the easiest way to get a second factor. The app gets a "pushed" request from the IdP and presents the user with a message they must acknowledge; the mobile app acts as the thing you have. While possible, it is unlikely that both the phone and the PC will be stolen, and if one is stolen it is useless without the other.

Needless to say, the app must NOT contain either the User ID or the Password, in case it is stolen.

The Sign-In Process

We thought about this long and hard, and the process goes something like this:

  1. The user signs in
  2. The User ID and Password are validated and the process exits if invalid
  3. Is the User ID "active"? If not, exit the process
    This allows the admin to flag a user as ACTIVE or DELETED, thus stopping access selectively
  4. Get the IP address from the posted header: is it blacklisted? If yes, exit
    This allows us to blacklist known "bad" IP locations
  5. Get the IP address again: is it whitelisted? If yes, send the SAML token with no 2FA
    This allows us to whitelist internal networks as "safe" and therefore not requiring 2FA
  6. What sort of 2FA PROFILE does the user have?
    This is another special user attribute which can be:
    ALWAYS - The user is ALWAYS 2FAed and 2FA begins now
    NEVER - The user is NEVER 2FAed and the SAML token is sent now
    NORMAL - The process continues
    This gives the admin the flexibility to force (or not) 2FA on a user
  7. What browser/PC is the user trying to access from?
    I will be covering this in some depth in a later post under "Fingerprinting"
    I can say it does NOT contain cookies!
  8. If the user has a NORMAL 2FA profile, the last time they were 2FAed is tested: if it is more than 7 days ago, 2FA is requested; if less, the SAML token is sent

And that is basically what we coded for.
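For the curious, the decision tree condenses into something like the following PHP sketch. The helper functions and user-record fields are hypothetical stand-ins for our real code, and step 7 (fingerprinting) is left out until that later post.

<?php
// Condensed sketch of the sign-in decision tree above.
function decideSignIn($user, $clientIp)
{
    if (!$user || !$user->passwordValid) { return 'FAIL'; }      // steps 1-2
    if ($user->status !== 'ACTIVE')      { return 'FAIL'; }      // step 3
    if (ipIsBlacklisted($clientIp))      { return 'FAIL'; }      // step 4
    if (ipIsWhitelisted($clientIp))      { return 'SEND_SAML'; } // step 5

    switch ($user->twoFaProfile) {                               // step 6
        case 'ALWAYS': return 'DO_2FA';
        case 'NEVER':  return 'SEND_SAML';
    }

    // Step 8: NORMAL profile - re-challenge if the last 2FA is stale
    $sevenDays = 7 * 24 * 60 * 60;
    return (time() - $user->last2faTime > $sevenDays) ? 'DO_2FA' : 'SEND_SAML';
}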
In the next post I will look at the 2FA process in some detail. I bet you can't wait.

 


Monday 2 February 2015

Two Factor Authentication And Smartcloud (Part 1)

The next set of posts, the first in over a year, will explore my latest project: attaching two factor authentication to IBM Smartcloud. This is the topic I bored the pants off people with at ConnectED this year, mainly because I am rather proud of doing it and it has a certain XML-parsed coolness. So without further ado, this is the first post of a multi-part series that tells the story of how I added two factor authentication to Smartcloud for less than 20 Altairian dollars a day.

WHY?

A. SmartCloud does not have it.
B. Google / Office 365 etc do have it.
C. Smartcloud is considered less secure because it does not have it and the others do.

Now, whether or not C is actually the case is a moot point. When you line up a comparison table of functions available from the enterprise cloud providers, CIOs and CTOs notice that Two Factor Authentication (2FA) is missing in the Smartcloud column and they consider that to be a failing. A failing sufficiently notable to discount Smartcloud from consideration as a cloud-based solution.


Such was the thinking in my case. A hybrid-model Smartcloud deployment ticked all the boxes for user functionality: Notes Mail, Calendar, To Do, Contacts, Connections, Files, Sametime, Meetings, Traveler, Connections Mobile and support for the myriad of our own applications. But all this was for nothing if Smartcloud was considered less secure because of the absence of 2FA.

The addendum to the 2FA requirement was that, as Google et al have 2FA built in as part of the subscription price, any solution we provided needed to come without a noticeable increase in cost per user per month.


HOW?

Well, that stinger of minimal cost was the Prime Directive as far as our solution was concerned. There are plenty of Identity Providers (IdPs) out there that will supply you with 2FA facilities, however these will cost you money: $2-$10 per user per month. So by definition these solutions, however laudable, were outside the bounds of consideration.

We had to do this ourselves and we had to do it quickly.


Smartcloud allows for Federated Logon, where the sign-in process is passed to a third party IdP; once the IdP has done all that it needs to do to verify the user's identity, it passes a SAML token back to Smartcloud (aka the Service Provider, or SP), which allows the user to log on.

The Smartcloud servers do not care what the IdP does other than it has to pass a properly formatted SAML token back to the cloud. What we needed was something we could host on-prem that would validate the user and when required process the 2FA.


Smartcloud has several flavours of Federation

  • Normal - All users use Smartcloud for Sign-In
  • Federated - All users use a third party IdP to Sign-In
  • Hybrid - The user can choose to log on from either the third party IdP or Smartcloud
  • Partial - The Admins choose the server the user will use to Sign-In
The best fit for our purposes was Partial, as this left the choice of security to the admin teams; as such we could enforce the security policies in a way that guaranteed they were being followed, while still leaving the option to switch a user back to IBM-only security validation should the need arise (e.g. a catastrophic failure of the on-prem IdP).


So with that taken care of we now had to select an IdP that would allow us to:
  1. Validate the user with the first factor (Userid and Password) 
  2. Allow us to control the 2FA process using a second factor
  3. Send IBM a properly formed SAML token

Validating the User with the first factor

There are four things to consider here.
  1. The data source in which we will store the users' data attributes
  2. The code that does the Initial Validation
  3. The code that does the 2FA
  4. The code that creates the SAML Token
The data store can be anything: DB2, MSSQL, MySQL, LDAP. However, as we shall see in a later post, there are user attributes and separate session attributes whose complexity made me discount LDAP as a data source.
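To give a flavour, here is an illustrative pair of MySQL tables; the real column lists are richer, but this shows why a relational store was a comfortable fit for both the user attributes and the per-transaction session state.

<?php
// Illustrative schema - table and column names are hypothetical stand-ins.
$db = new PDO('mysql:host=localhost;dbname=idp', 'idp_user', 'secret');

$db->exec('CREATE TABLE IF NOT EXISTS users (
    user_id       VARCHAR(128) PRIMARY KEY,  -- the provisioned email address
    pwd_hash      VARCHAR(255) NOT NULL,
    status        ENUM("ACTIVE","DELETED") NOT NULL DEFAULT "ACTIVE",
    twofa_profile ENUM("ALWAYS","NEVER","NORMAL") NOT NULL DEFAULT "NORMAL",
    devid         VARCHAR(64),
    last_2fa      DATETIME
)');

$db->exec('CREATE TABLE IF NOT EXISTS transactions (
    txn_id  VARCHAR(64) PRIMARY KEY,
    devid   VARCHAR(64) NOT NULL,
    status  ENUM("WAITING","OK","CANCEL","FAILED") NOT NULL,
    created DATETIME NOT NULL
)');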
 

The code was a thorny problem. While some platforms allow for user validation and SAML token production, they do not provide easy hooks that allow you to interrupt the Sign-In process and insert the 2FA step, and rightly so, as this would be a security hole. Given the complexity, this avenue was discounted, although I may want to explore it further.

My core competencies are in RPG/DB2, PHP/DB2 and PHP/MySQL, all of which allow for complex coding and data stores. The deciding factor was the production of the SAML token. There is an excellent open source SAML framework called SimpleSAMLphp. This framework allows you to create an IdP that will do the first factor (Username and Password) validation, lets you add in your own Second Factor Authentication code, and produces a correctly formed SAML token which is posted to the Smartcloud, all using PHP.
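By way of illustration, a SimpleSAMLphp custom auth source is a small class that extends the framework's UserPassBase and overrides its login() hook (shown here in the 1.x class-naming style we were using). The "avx" module prefix and the two validation helpers are hypothetical; UserPassBase and the WRONGUSERPASS error are the framework's own documented mechanics.

<?php
// Skeleton SimpleSAMLphp (1.x) custom auth source.
class sspmod_avx_Auth_Source_TwoFactor extends sspmod_core_Auth_UserPassBase
{
    protected function login($username, $password)
    {
        // First factor: validate against our MySQL user store
        if (!validateFirstFactor($username, $password)) {
            throw new SimpleSAML_Error_Error('WRONGUSERPASS');
        }

        // Second factor: our own push-and-poll dance (see the later posts)
        if (!runSecondFactor($username)) {
            throw new SimpleSAML_Error_Error('WRONGUSERPASS');
        }

        // The attributes that end up in the SAML token; Smartcloud wants the
        // provisioned email address as the user ID (attribute name here is
        // an assumption).
        return array('EmailAddress' => array($username));
    }
}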

Platform

The platform choice was an internal one. We were already using our System i's for other web purposes, and running the IdP on an SSLed port other than 443, while not impossible, was going to be awkward because of the format of the URLs the SAML exchange requires. So the platform of choice for us was a LAMP server, again because, after System i, this is where our core competencies lie.


Conclusion

So we now had a starting point: a LAMP server with SimpleSAMLphp installed, storing all the data we need in MySQL tables. Next we moved on to defining the Sign-In process in detail, and that we will explore in the next post.


 

Wednesday 2 October 2013

Of Bruce Elgort , (very small) Alien Abduction, OpenNTF and changing times.

The story of Bruce.

So today is one of those rare moments of transition when something moves from one era to another. Specifically, Mr Bruce Elgort, after an illustrious career at the helm of OpenNTF, is standing down as chairperson... or should that be "standing up from the chair", because if he doesn't stand up then they will need another chair... anyway, I digress.

I first met Bruce in 2007 when we were both abducted by aliens, easily identifiable by the fact they are physically incapable of saying the words "Three" and "Column". These aliens plied us with drink, arranged bus tours, gave us a bag full of "things" (thankfully there were no USB devices that could have been used for "probing") and held us captive in a hotel in Dublin for several days. It was noted that the aliens in question had assimilated themselves into the geek community by pretending to be Paul Mooney and Eileen Fitzgerald. Like white mice in a maze we were subjected to a series of tests, one of which was to stand in front of a room full of other abductees and talk for an hour about something interesting and educational; whilst this test was going on our alien overlords hovered in the background with clipboards and waved signs that told us what would happen to kittens if we didn't stop talking.

I remember that afternoon as if it were 6 years ago. There I was, along with 50 or 60 other abductees, tired and over-emotional, herded into a large room clutching a bag of "stuff" to hear all about "Free Stuff" and "OpenNTF" in particular from the triumvirate of Pettitt, Schumann and Elgort. We, to a geek, left that session buzzing with apps, code and a desire to find an Internet connection and start downloading, and in my case uploading, stuff to OpenNTF. But this was 2007 and we were in Dublin; the Internet had not been invented, indeed inside toilets and wire coat hangers were still some years in Ireland's future. So there was nothing to do other than drink Guinness and amuse ourselves by taking notes with a thing called "A PEN" on a device called "PAPER" - oh how the world has changed. I still have the Blackberry notepad with the phrases "Elgort" - "Rather Bald" - "Ben Poole is not a Wiki - Mark Myers is a Wiki" and "OpenNTF", the latter being underlined in 3 colours with 6 exclamation marks.

Bruce has had many adventures since; when you have an hour to spare, buy me a pint and ask me how Bruce rescued a party of nuns from pirates using nothing but a RESTful API, a goat, some moisturiser and a funky paradiddle.

Bruce, old bean, you will be a very hard act to follow! So from this one example of someone you helped get "really" going in the geek profession: thank you for your indefatigable energy and enthusiasm... it really did, and does, help folk like me get off their arses and start learning!

PS. Oh and thank you for having the amazingly good taste to marry the ever lovely Gayle ;-)

Thursday 5 September 2013

OK who turned out the lights and where is the tuna?

... and here I sit for the first time in 3 days, alone with my thoughts outside Brighton railway station in glorious autumn sunshine. ICON13 is over and real life drifts around me in ill-advised shorts, double entendre tee-shirts and the occasional "kiss me quick" hat. Sitting on the low wall, it occurs to me that I have been told by many people that we are afforded moments of clarity when all becomes clear and this wibbly wobbly continuum we call home makes perfect sense. Sadly this is most definitely not one of those moments.

This post is not a plea to anyone to do anything, nor am I getting all cross, pointing the finger and attributing blame; it is simply the passing observations of a willing surfer dude who has enjoyed riding the wave of geek enthusiasm that Kitty and Warren and the rest of the team have provided year on year for the last 7 years.

I have been honoured to be a small and sometimes profane cog in the Lego Technic engine Kitty and Warren have crafted these 7 years and more out of the raw materials of knowledge, dedication, enthusiasm, the deadly cat-herding skills of a ninja manager and, when all else fails, Kitty's tablet. Like all things I have found that echo with that indefinable "something" that sets the really great apart from the mundane, I did not want it to end... but things do end, 'tis the nature of things, and I (indeed we) need to look to the future and ensure that Kitty's and Warren's legacy endures: a conference that has, during its span, changed from a simple crowd of geeks into a living, breathing community of colleagues and indeed lots and lots of firm friends! As I sip my extra-large 4-shot mocha I wonder what will come next, but 'tis too early to speculate.

Hey ho *sigh*. As a "helper", and I use the word helper advisedly, I shall miss the craic of being in a team with a purpose, and a pretty good and very satisfying purpose at that. With my next sip I remember that, like 100's of others, I was an attendee first. Dare I say it, the single most important and yes, vital, ingredient for UKLUG, ILUG and ICON was the attendees. I can still vividly remember the day when I wandered from a Rob McDonagh/Julian Robichaux hour of web agent magic into an hour of Bill Buchan at his very best. I was stunned, I was invigorated, I was inspired; I went home buzzing with a head full of ideas that went into the admin/dev teams I was working with and promptly led us in new directions that saw our end users getting faster and better service from the company's investment in Domino.

Those 7 years of GB LUGs did for this one attendee much, much more than just provide 3 days away from the office with the chance of a few pens and memory sticks. Although I have no actual metrics to back this up, I am as sure as I can be that the LUGs were the driving force that took me to the next level year on year, enabling me to make a difference in the other 362 days of my professional life, and that is just so fecking awesome! I am sure I am not alone in that particular feeling, and the feeling of loss is palpable.


I am at heart an old, slightly unkempt hippy and my character was formed in that era of "vibes", and as an attendee, speaker and putterupper/takerdowner of things at LUGs, the vibe was good. Indeed, if truth be told, it was a top-drawer, primo, organic, free-range excellent vibe (young readers please note I did resist putting "Dude" at the end of that sentence, as I completed the 12-step patchouli oil addiction program in 1979 and I have no wish to go through THAT again). The vibe came from the speakers, who spoke because they knew their stuff and wanted to share it just because they could. The sponsors who stumped up the cash to run the event recognised the LUG vibe and never pushed the commercial side too hard, I think because they saw the value of talking to customers and potential customers, both at the booths and then later in the bar in a more convivial, friendly atmosphere.


I think the LUGs are the "Farmers' Market" of the community. The LUGs are the place you go to get the "special" stuff you cannot get in the high street. Don't get me wrong, there is a place for the high street with the big shops, we will always need them - BUT - there is a synergy between the formal and informal events that keeps things fresh, vibrant and enthused.

I would love to go to Connect; sadly it is unlikely I ever will, but as a distant (and envious) observer, the somewhat locked-down, Disneyfied formality and the sheer size of the endeavour make me wonder if I would enjoy it as I do the LUGs... very probably... but in a very different way. I suppose what I am trying to say is the LUGs were "ours", and I have an affinity for what is mine; I am saddened to have lost that and I think that the community will be much, much less for it if a "post-LUG" event does not fill the vacuum.


So my coffee is nearly done, and in 10 minutes I shall board my train and close the door on one part of my life that has been filled with education, laughter, amazement, friendship and the very best of company. It is really quite sad.

So here I sit like Schrodinger's cat in a dark box wondering where the nice Physics Geeks have gone and who ate all the tuna. - I wonder what I will find when the box is opened?

Tuesday 27 August 2013

All change has at its heart a moment of melancholy

.... and today more than most.

For Warren Elsmore announced that ICON, formerly known as UKLUG, is bowing out after 8 years of state-of-the-art conferencing.

The world, it seems, changes, moves on and re-aligns its priorities in new and interesting ways, yet these changes are always tinged with a moistness of eye and a faint longing to be returned to "the good old days"; a moment of Kleenex requirement, if you will.

I am honoured to have helped with UKLUG and ILUG in a small way for several years and it is without hesitation that I am marking its demise with a standing, albeit virtual, round of applause, cheering, whistling and throwing my cap in the air. For, not to beat around the bush, the work that Kitty and Warren and the other team members put in to making each and every UKLUG event a success is considerable... very, very considerable, and this hard work was echoed at BLUG, MWLUG, AUSLUG and every other LUG around the world.

It is a matter of continuing amazement to me that a few people who care can make a substantial change. Theo at BLUG, Warren and Kitty at UKLUG, Paul and Eileen at ILUG and all the other people involved with LUGs are fired up to the point that they are willing to give of their time and themselves and take a quite substantial risk... a risk taken to make a difference.

Warren and Kitty's hard work over the years has left behind an enormous crowd of people who left each and every event better than when they arrived. They left with new skills, they left better equipped to deal with the problems they face, they left with address books bulging with contacts of both BP sponsors and fellow attendees and, most valuable of all, they left inspired... and that is one hell of a most awesome, wonderful thing to have achieved! So to the other team members, speakers, sponsors, partners, attendees and most of all Warren and Kitty, I salute you.

THANK YOU THANK YOU THANK YOU THANK YOU

Tuesday 29 January 2013

Who can say where the road goes, where the day goes? Only time....

Although I am not over in Orlando at Connect 13 this year, the distance from my friends is all the more difficult with the news that one of the "Geek Bikers", Kenneth Kjarbye from Denmark, had a fatal accident on the Annual Hog Ride.

Stuck in Ireland, I have only my words to reach out to Kenneth's family and my friends, who are at this time in a dark place they were not expecting to be, not even in their worst nightmares... <sigh> But there are no words I can think of that really do help; there are no words that wrap those in pain in the warmth of a hug, there are no words the equal of shared tears, there are no words that stop the birds of sorrow from flying over the heads of Kenneth's family and my friends.

It is sufficient to say that I and the 100's, if not 1000's, of community members near and far are thinking of all those affected directly and indirectly by Kenneth's tragic and untimely death. I, like many others, will raise a glass in the coming days and remember Kenneth fondly. For me he will be remembered as one of the "Vikings" from the LUGs, and my toast will be: here's to Kenneth, fellow geek and fellow biker - good man yourself!

A sad day ... a sad day indeed.

Thursday 3 January 2013

Official start of the Domino Charity Marathon Dander For Cash 2013

The Dander Route
A while back I mentioned that there would be a reprise of the Domino Dander for Cash this coming year.

It gives me great pleasure to announce that this year's Dander will be longer, bigger and more challenging than the two that have preceded it.

As of today it is official: a spreadsheet has been created, accommodation is being looked at and foot care product sales have gone through the roof in certain places around the UK and USA.




When I say "we" these are the brave souls that will attempt to walk the 80 miles of  "The Great Glen" in Scotland in kilts in May.

Eileen Fitzgerald, Tony Holder, Bill Buchan and Carl Tyler (and niece) are definites, and Julian Woodward may possibly join us. Frank Doherty has offered to do some of the logistics if needed. We are starting on the west side of Scotland at Fort William and walking east, up along the Caledonian Canal, along the full length of Loch Ness and then down to the city of Inverness on the east coast.

80 miles split over 4 days of approximately 20 miles each.

Well, it needs to be a challenge, otherwise you folks out there won't give us lots of your hard-earned cash for the charity we are doing this for. To be blunt, 80 miles in 4 days is MORE than enough of a challenge for myself! So the walkers now have 20 weeks to get into shape, trim the kilts and get "fettled", as they say in Norn Iron, for the task ahead.

Since we are walking in Scotland we thought a local Scottish charity would be a good idea, and although the actual charity is not finalised, we have sort of agreed that, given that Bill Buchan's village charity raft race is on the weekend we finish, we will probably support the larger of the charities they help. It looks likely that the very excellent and deserving Children's Hospice Association Scotland will be the one we will be raising monies for - watch this space for details.

Two years ago, when all the cash was tallied up, we raised over £3000.00, and that was for 26 miles in kilts; this time it is a good deal further AND "Big Firm Tony" will be with us. THE WHOLE WAY!!!!!!!!! So when the time comes I will be expecting you to be generous ;-) and I would think that we should be able to manage £4000.00 this year if we try extra hard.

There will hopefully be a dedicated blog on the way shortly where you can follow the preparations for the Dander and the Dander itself. I am also designing a tee-shirt for the walkers. Both the blog and tee-shirt will have spaces for any ISVs or BPs out there that would like (for a small charitable donation) to have their logos emblazoned on the blog and the manly and womanly chests of the participants.

So if you are a BP and would like to have your company associated with this kilted charitable community challenge, drop me an email and I will very gladly take any (or all) of your money :-D and do my best to plug the life out of your company name out there on the internet!

If any of the folks reading this want to join us for the walk but can't do the whole thing, feel free to come and join in for one or part of any of the days we are walking - the more the merrier. Again, just drop me an email and I will send you details of where we will be on what day.

There is also likely to be a big slap-up meal in Inverness on the Friday, to which any local Domino folks, or indeed any geeky folks that might be about, are more than welcome to come and buy us drink - again, more details later.

So watch the news feeds for more details about blogs and how BPs and individuals can help us reach our target for this year.

Spread the News  ..

THE DOMINO DANDERS ARE BACK .... AND THIS TIME IT'S SCOTTISH!

03/Jan/2013 ** Update ** Chris Coates has just confirmed he has joined the Danders!!
03/Jan/2013 ** Update ** Julian Woodward has moved from "Possibly" to "Almost definitely"

Tuesday 27 November 2012

Interesting thing coming in ECMAScript 6

JavaScript has always been the poor cousin to all that whizz-bangery that happens on a server, and as a result anything new coming down the pipeline kind of gets lost in the news-stream of super-duper server improvements.

I try, where possible, to keep up to speed with what's about to come along, and it was with this in mind that I was casting my eye over Juriy Zaytsev's ECMAScript 6 compatibility chart and started to notice greens appear. Some of the current or beta versions of the browsers are starting to support the new v6 changes, so it shouldn't be that long until we start to see them in the wild and can use them in anger.
(If you are interested in the ECMAScript 6 draft doc you can download it here.)

Of the changes coming, my interest is piqued by the following.

1. Modules

While there are some JS libraries that do something very similar, ES6 will provide native support for modules. The rationale behind this is to provide for:
  • Standardized mechanism for creating libraries.
  • In-language loading of external modules (non-blocking, direct style).
  • Portability between different host environments.
  • Future-proof mechanism for platforms to introduce experimental libraries without collision.
  • No installation/registration code within modules.
As modules are static by default (but can be dynamically reflected) they are far more compatible with static scoping. Modules are loaded from the filesystem or network in direct style instead of by callbacks, which means the main thread of execution is not blocked by the load.

It is hoped that modules will be used for scoping, and their implementation preserves and promotes static, lexical scoping, thus avoiding the many problems of dynamic scope: programming pitfalls, malicious scope injection, poor performance and lack of modularity. Static scoping is also necessary for checking for unbound variables at compile time.

A simple module would look like this

module orders {
export function ASP(qty,val) { return val/qty; }
export var dollar = "$";
export var pound = "£";
}


This module would then be accessed in your JS code like this (using the draft import syntax):

import {ASP, pound} from orders;
alert( pound + " " + ASP(ThisOrder.Quantity, ThisOrder.Value) );


I can see instances where this will be very useful!

More details on Modules can be found here

2. Object.Observe
Object.observe gives us the ability to watch JavaScript objects and report changes back to the application: changes like properties being added, updated, removed or reconfigured.

When I am building a UI framework I often want to provide the ability to data-bind objects in a data model to UI elements. A key component of data-binding is tracking changes to the object being bound. Today, JavaScript frameworks which provide data-binding typically create objects wrapping the real data, or require objects being data-bound to be modified to buy in to data-binding. The first case leads to an increased working set and a more complex user model, and the second leads to siloing of data-binding frameworks. ES6 will get around this by providing a run-time capability to observe changes to an object. Here is an interesting discussion on this soon-to-be-available new feature.

3. Default Parameter Values

Default parameter values allow us to initialize parameters if they are not explicitly supplied, so you can do things like


function dspInputPanel(footer = "Steve McDonagh")
{
       // ... build the inputPanel object ...
       // (footerElement is the panel's footer element, built above)
       footerElement.innerHTML = footer;
}

So when I call dspInputPanel() with no parameters, the footer element will contain "Steve McDonagh";
if I call dspInputPanel("Anne Other") then the footer element will contain "Anne Other".

4. Block Scoping
There will be 2 new declarations available for scoping data in a single block

let, which is similar to the var declaration but allows you to redefine a var in the let block's scope without changing the original var in the scope of the function:

function doInterestingStuff()
{
         var x = 5;
         var y = 6;
         let (x = x*2,y =y*3) { alert( x+y ); }     // pops up 28
         alert(x+y)                                            // pops up 11
}


const is the other declaration; it is like let but is used for read-only constant declarations.

5. Maps
Arrays of name-value pairs have been around a long time in JS, and ES6 will introduce the new Map() object with its methods set(), has(), get() and delete() that make using them even easier.

var myDogs = new Map();
myDogs.set("Fido","Poodle")
myDogs.set("Rover","Collie")
myDogs.has("Fido")                                 // Returns true
myDogs.get("Fido")                                 // Returns "Poodle"
myDogs.delete("Fido")                             // Returns true when deleted
myDogs.has("Fido")                                //  Now returns false


6. Sets
Sets are basically arrays of unique values, and there is a new object creator Set() with the associated methods has(), add() and delete().

var myCustomers = new Set( ["IBM","DELL","APPLE"] )
myCustomers.has("IBM")                   // returns true
myCustomers.add("ASUS")                  // adds "ASUS" to the set
myCustomers.delete("IBM")                // removes IBM from the set
myCustomers.has("IBM")                   // now returns false

This will make array filtering so much easier. Consider the following, where I have an array of customer names that I want to ensure is unique; this new method is much, much easier to read.

function unique( customers )
{
       var ucust = new Set();
       return customers.filter(function (item) {
                                     if (ucust.has(item)) { return false; }
                                     ucust.add(item);
                                     return true;
                               });
}



There are loads more changes and improvements in the spec, and it seems that ES6 is targeting a 2013 full spec release, but as always some browsers are already implementing individual features and it's only a matter of time before their availability is widespread.

JS it seems may be coming out of the closet in the next 6 months and may soon be considered a "proper" language :-)

Friday 23 November 2012

The new CSS3 @supports() rule is really rather cool!

As all devs know, browsers can, in varying degrees, be a right royal pain in the arse when it comes to standards compliance, and when you throw in companies like Never Upgrade a PC Till It Breaks Inc., who are still running XP with IE6, planning your super-duper new web site to support them can be fraught with problems.

Most of us are used to the idea of designing a UI that degrades into a DBA-UX (Different But Acceptable User eXperience). To do this we have to be able to work out exactly the support for each feature that we use in our design and have some "alternate" view that we can switch to.

Up until now I have relied on the wonderful Modernizr.js, which smooths out a lot of the inconsistencies between browsers, particularly the older rust buckets that NUPCTIB Inc. use.

However, there is a new CSS rule that will also help you. Ladies and gentlegeeks, let me introduce @supports(), which has the syntax:

@supports <supports_condition> { /* specific rules */ }

@supports is supported in most of the current browsers, but as you might expect IE has ignored it and Safari doesn't have it yet. If a browser that does not know what @supports is loads your CSS, it will simply ignore the enclosed block, so you can still use your normal fallback methods.

Basically, what @supports() does is query the CSS engine for support of whatever it is you need and then invoke the enclosed CSS rules accordingly.

@supports (display: table-cell) { /* some table-cell css in here */ }



This will test the CSS engine for table-cell display support and apply the rule if it is supported.
You can also use a negative test for a rule not being supported.

@supports not (display:table-cell) { /* cope with non-support CSS here */ }

... and you can string together logical ANDs, ORs and NOTs!

@supports (display:table-cell) and (display:list-item) { /* CSS goes here */ }

I am sure you get the idea and can see the usefulness of this addition to the designer's toolbox.

Thursday 22 November 2012

Useful i5/OS tip - Displaying Locks on an IFS Object

I was plagued this week by an odd problem on one of our i5 boxes. I was trying to use CPYFRMIMPF & CPYTOIMPF to pull in data from a new Japanese division that uses nothing but Japanese characters in their data. This of course means UTF-8 / Unicode data, which can be a bit of a pain to set up in a DB2/i5 data table (particularly if someone forgets to make fields something other than CCSID 65535!)

Anyway.... I could get data off the system using the CPYTOIMPF into the IFS no problem at all, DBCS to UNICODE worked like a treat and everything was well with the world ... BUT ... try as I might I could not get CPYFRMIMPF to bring the data back into the DB2 file again.

There was a rather odd CPE3025 message that told me the input file or path did not exist (error code 3025), and yet there it was; I could open it, read it, edit it and save it, and everything seemed perfect... but time after time I got the CPE3025 error and no data was transferred. I tried all day with no success and eventually went home, hoping that a night's sleep would clear the mind and inspiration would come in the morning.


This morning I came in and did a CPYTOIMPF, which worked fine, then did a CPYFRMIMPF... and... it worked perfectly with no errors.
After a bit of experimenting, the culprit was discovered to be the fact that I had opened the file using Operations Navigator. Even though the file had been closed normally, Ops Navigator holds a lock on the file until Ops Nav itself is closed; the net effect is that the file is unavailable to the CPY* command.

Part of this analysis used a rather useful but lesser-known API that you can use to track locks on objects in the IFS. The API is this:

CALL QP0FPTOS PARM(*LSTOBJREF '/ifspath/ifsfile' *FORMAT2)

You need the *SERVICE special authority, and the API dumps the locks to a spool file.

Easy when you know how, but not an obvious tool for this particular problem.





Thursday 15 November 2012

Domino Dander for Dosh 2013

Gentle readers some non-techie news!

The bold Eileen Fitzgerald and myself are planning a Dander For Dosh in May next year.

Eileen and I were joined by the indefatigable Carl Tyler for 2012's walk along the coast between Bray and Wicklow in Ireland. We didn't pester the life out of you because we didn't get the giving organized in time for the actual walk. This year we will get our act in gear and start demanding money with menaces in January - you have been warned!

Conscious of the fact that the world is a tad short of cash, we felt that we needed a real challenge, one that would stretch us physically and encourage you to part with your hard-earned cash. We are still discussing our options, but top of the list is "The Great Glen Way" from Fort William on the west coast of Scotland to Inverness on the east: 80+ miles at around 20 miles a day for 4 days.

As plans form there will be more posts and many, many requests for cash. However, if anyone would like to join us for 4 days of walking the length of the Great Glen, under Ben Nevis, along the edge of Loch Ness and down into the "granite" city, drop me a line and I will add you to the distribution list for our detailed plans.

Eileen and I cannot guarantee you good weather, but we can guarantee you four days of eclectic conversations, beautiful views, good food and good company (as you would expect from a group of Domino geeks). If you want to join us for one day, two or all of them, you will be very welcome (mainly as Eileen knows all my jokes and craves new material).


So watch this space, marvel at our foolishness and, when the time comes, sponsor us as much as you can!

Sunday 4 November 2012

Alternatives to the evil eval() in JavaScript

I am interrupting the Design Series of posts for a quick JavaScript post that comes out of a question asked on the JavaScript forum on LinkedIn about alternatives to the eval() function in JavaScript.

The most pertinent reasons for not using eval() are:-

1. Security - it leaves your code open to a JS injection attack which is never a good thing
2. Debugging - no line numbers which is a PITA
3. Optimization - as the code to be executed is unknown until runtime, it cannot be optimized.

Sadly, eval() is way too easy to use, and as a result we see it all too often in places we shouldn't really see it, and this can leave the door open to some ne'er-do-well making a mockery of the rest of your obviously wonderful code.

So what to do to avoid using eval()? Well, alternatives for the 3 main areas where I have used eval() in the past are listed below.

1. Working out which bit of an object to change, for example something like this:

eval('document.'+IdName+'.style.display="none"');

Now I am not suggesting anyone WOULD do this given the tools available but I have come across code like this in older applications written by people just starting out in the wonderful world of JS.

document[IdName].style.display= 'none';

or

document.getElementById(IdName).style.display='none';

are both much safer and a lot faster.

2. Getting JSON that has been returned by AJAX calls to the server into the DOM, like this snippet of jQuery is doing:

$.ajax({ url: "getdata?openagent&key=SMCD-98LJM",

              success: function(data) { eval(data),
                                        doSomethingInteresting()
                                      }
        })

This will work, but a more satisfactory way would be to use the JSON data type in the AJAX call:

$.ajax({
  url: url,
  dataType: 'json',
  data: data,
  success: *callback*
});

alternately use the $.getJSON() jQuery shorthand function.

If you don't use jQuery (or dojo which has an equivalent) you can use the powerful JavaScript JSON object.

var myJSONData = JSON.parse(data, reviver);

Where DATA is the JSON data string and REVIVER is an optional function that will be called for every KEY and VALUE at every level of the final resulting object. Each value will be replaced by the result of the REVIVER function. For example, this can be used to change strings in the JSON text into JS Date() objects.

3. The thorny and potentially very insecure loading and execution of code blocks, probably the most common use of the eval() function in JavaScript. Anywhere you allow code as text to be passed in and run in the browser is prone to attack; it is much better, where possible, not to do this. When you absolutely have to, I would say using the Function constructor is less risky than a bold eval() call.

var myCode = "... your JS code ..."
var doSomething = new Function(myCode)
doSomething.call()


This will have the same effect as eval() but is less prone to being found as an avenue of attack by the forces of chaos on the internet. **Note** you can also use the constructor format below to pass parameters to the code block.

var doSomething = new Function("..var1name..","..var2name..", etc .. "..myCodeString..")

On a side, but related, note: when you create a normal JS function construct, the definition does not have to appear at the start of the script (though it is usually best to do so for the sake of clarity). It can even be defined after the code that calls it. In most cases, no matter where you choose to define your function, the JavaScript engine will create the function at the start of the current scope. BUT, and it is an all-caps BUT, if you need to construct a code block conditioned on an IF, like this:
if( someThingIsTrue )
 {
 function doSomeThingWonderful() { .... }
 }


Mozilla-based browsers will allow it, but most others do not, as they will always evaluate the function, even if the condition evaluates to false. So do not try to declare functions in the way noted above. Declaring functions inside these statements is possible in all current browsers using assigned anonymous functions, so it is better to do it this way:

var doSomeThingWonderful;
if( someThingIsTrue ) {
  doSomeThingWonderful = function () { /* code block A */ };
} else {
  doSomeThingWonderful = function () { /* code block B */ };
}
doSomeThingWonderful();

There are other reasons you might need to evaluate objects or code, but generally there are ways around most of them that, whilst potentially slower to process, are more secure and sensible.

Monday 15 October 2012

Principles of Design #6 - Colour Theory, The Basics


Right, I could wax long and lyrical about colours and how to use them, which I have to say probably sounds odd coming from me given that I am nearly colour blind. However, I have been colour blind all my life, and to me the sky is blue and grass is green because that is what we are taught from when we are small.

Colour is very hard to describe to someone else without using the word "like"; in fact most colour names are based around a descriptor that carries with it the meaning of the colour being expressed.

For example "Cornflower Blue" should make you think of the colour of Cornflowers like the one on the left or of a room that was painted with cornflower blue paint. There is no such thing as normal colour vision, we all each and every one bring our own baggage to this quick wander down the garden path of colour theory.



Colour Theory is a BIG topic, so I will only be looking at 3 specific areas in this post, areas that any web or app designer needs to have a firm grasp of if they are to produce finished code and colour schemes that are beautiful, pleasing, and work within the context of the app you are developing. These topics are:
  1. The Colour Wheel
  2. Colour Harmony
  3. Colour Context
The Colour Wheel

The colour wheel is one of those things I never see on a geek developer's desk or in their favourites, and yet it is a tool that artists and graphic designers use daily! Go to any art store and pick one up; they will know exactly what you want if you ask for "A Colour Wheel". Alternatively, you can use one of the many online colour wheels; this is one I use a lot and can recommend:
http://colorschemedesigner.com/


Colour wheels are arranged so that the colours move from red at the top around the rainbow until you come back to the blues and violets at about 11 o'clock. You will notice that the wheel on http://colorschemedesigner.com has the words WARM and COLD at 1 and 7 o'clock. This does not mean that only the colours at these "times" are warm or cold; there is a transition going on. The reds, oranges and yellows, which are the colours of fire, embers and the sun, convey warmth, while the greens, turquoises and blues are the colours of grass and water, traditionally cool things. But be careful: it is transitional, and you move from warmer to cooler in gradual steps in each colour.



A colour circle based on red, yellow and blue is traditional in the art world; however, Sir Isaac Newton was the chap who developed the first circular diagram of colours we know of, in 1666. Since then, scientists and artists have studied and designed numerous variations of this concept. Differences of opinion about the validity of one format over another continue to provoke debate. In reality, any colour circle or colour wheel which presents a logically arranged sequence of pure hues has merit.




Primary Colours: Red, yellow and blue
In traditional colour theory (used in paint and pigments), primary colours are the 3 pigment colours that cannot be mixed or formed by any combination of other colours. All other colours are derived from these 3 hues. When you mix pairs of these 3 colours you get the Secondary Colours: green, orange and purple. If you start mixing primary and secondary you get the Tertiary Colours: yellow-orange, red-orange, red-purple, blue-purple, blue-green, yellow-green and so on. Gradually, as you mix the colours, you get the wheel you can see at http://colorschemedesigner.com.



OK, I get that colours are colours and they can be placed on a wheel, so how does that help me and why should I use a wheel at all?

Good question! This is where the next topic comes in.

Colour Harmony

In visual experiences, harmony is something that is pleasing to the eye. It engages the viewer and creates an inner sense of order, a balance in the visual experience. When something is not harmonious, it is either boring or chaotic. At one extreme is a visual experience so bland that the viewer is not engaged; the human brain will reject under-stimulating information. At the other extreme is a visual experience so overdone, so chaotic, that the viewer can't stand to look at it; the human brain rejects what it cannot organize because it cannot understand it. Creating harmony is the task designers need to get right, as it delivers visual interest and a sense of order.

The Schemes!
Look again at the colour wheel

The very slim outer ring comprises the primary colours, the inner rings the secondary, tertiary and so on. So how do we combine these into a scheme? Look at the top of the site and you will see what at first glance look like odd-shaped buttons.

These are the 6 types of standard colour scheme that, for want of a better word, "work". Look at the one I have highlighted called ANALOGIC and note the 3 dark segments at the top of the circle; these represent colours that are close together on the wheel. If a colour is beside another colour it is called Analogous (or Analogic). You see this quite a lot in nature, and the human brain really quite likes it and accepts it readily; this is perhaps the easiest colour scheme to get right.
If two colours are opposite one another on the wheel they are deemed to be Complementary, and this is another of the scheme names. If you select the Complementary button on the http://colorschemedesigner.com site you will see this.

Note the appearance of two dots, one at 12 o'clock and the other at 6; these are the complementary colours. If you drag the dark 12 o'clock dot around the circle, the corresponding opposite dot moves with it, and the scheme displayed on the right of the screen shows a palette of colours that work well together. Note that as you move the dot, the palette colours never clash and are never discordant; they "work", and therein lies the beauty of the colour wheel!

There are 6 schemes in total and you can explore them at your leisure; however, I will mention one more, the Triadic, which combines 3 colours spaced around the wheel in a triangle shape. This can be hard to get right if you do not use a colour wheel!

As you can see, you get the dark dot at the top, which you can move clockwise and anti-clockwise; the two white dots at the bottom form a triangle and can be dragged as well, although they broaden or narrow the base of the triangle. Once again the wheel can be used to get colour schemes that work, although some may be a tad garish, so use with caution!

You will notice that the centre of the circle is the colour under the dark dot. This colour will be the one you select as the main colour of your scheme; the others will be secondary to it, hence the larger area of that colour in the right-hand palette pane.
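These schemes are really just angles on the wheel, so you can play with the arithmetic in code. Here is a minimal sketch, assuming the 360-degree HSL hue wheel that CSS uses (rather than the painter's red-yellow-blue wheel the site draws); the helper names are made up for illustration:

// Rotate a hue (0-359 degrees) around the wheel, wrapping past 360
function rotateHue(hue, degrees) {
  return (hue + degrees + 360) % 360;
}

// Complementary: the colour directly opposite on the wheel
function complementary(hue) {
  return rotateHue(hue, 180);
}

// Analogous: near neighbours either side of the base colour
function analogous(hue) {
  return [rotateHue(hue, -30), hue, rotateHue(hue, 30)];
}

// Triadic: three colours evenly spaced in a triangle
function triadic(hue) {
  return [hue, rotateHue(hue, 120), rotateHue(hue, 240)];
}

// e.g. a red base hue of 0 gives cyan (180) as its complement,
// which you could use in CSS as hsl(180, 100%, 50%)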

Context

How colour behaves in relation to other colours and shapes is a complex area of colour theory. Compare the contrast effects of different background colours for the same red square.
For most people with normal or nearly normal colour vision, red appears more brilliant against a black background and somewhat duller against a white background. In contrast with orange, the red appears lifeless; in contrast with blue-green, it exhibits brilliance. Also notice that the red square appears larger on black than on other background colours. This is context! Always try colour swatches of your colour scheme like this to see if they 'work' the way you expect them to and deliver the sort of balance and emphasis that you want to convey.

This is also where I have a problem, and http://colorschemedesigner.com comes to the rescue again: notice at the top right there is an option for..... Colour Blind.


Have a look at your colour scheme when you apply the different types of colour blindness filters and note how the tonal values change. Use these options in combination with the preview buttons at the bottom to see how others will see your colour scheme, and always remember that how you see the context of your scheme is not how others will see it if they have colour blindness or if their monitor is configured differently from yours. (I am Tritanopia colour blind; have a go and welcome to my world ;-) )

Next post we will go a bit deeper into the world of colour and look at hue, luminance and tone.
