Monday, May 31, 2010

Visualizing Email Communications

Visualization of an email communication network

Social network analysis software enables the interactions and relationships between individuals or organizations to be modeled and visualized in a graphical format, with individuals/organizations represented as nodes and interactions/relationships as edges. In recent months, such software has been used extensively to model and analyze behavior in social spaces such as Facebook and Twitter. It seemed to us that it might also be useful in analyzing and understanding communication patterns in email.

Emails are a key source of information in litigation (witness the publication of significant emails in the recent Goldman Sachs case) and are also monitored for compliance reasons in regulated industries. While most analysis of email data involves some form of keyword searching, there are occasions when it is important to understand who is in communication with whom, particularly if an investigation is at an early stage and may need to be broadened.

Understanding patterns of communication (as evidenced by email traffic) is also important when investigating why projects are failing or teams are not performing effectively. There is a substantial body of research that shows that communication issues are one of the primary reasons behind failing projects and dysfunctional teams. Analyzing and understanding the pattern of communications within a team or department can help business leaders and project managers identify where the breakdowns are occurring and target remedial action.

Gephi is an open source tool for visualizing networks (http://www.gephi.org/). It runs on Windows, Linux and Macs. While it will import files in a variety of formats (including CSV), the recommended format for importing data is .gexf, the Graph Exchange XML Format (see: http://gexf.net/format/index.html). GEXF is an XML-based file format that is straightforward to generate once basic email metadata has been extracted and stored in a SQL database.
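As a minimal illustration (the database file, table and column names here are invented placeholders rather than an exact schema), the Python sketch below reads sender/recipient pairs from a SQLite table of extracted email metadata, builds a weighted communication graph and writes it out as GEXF ready to be opened in Gephi:

    # Minimal sketch: build an email communication graph from extracted
    # metadata and export it as GEXF for Gephi. The database file, table and
    # column names below are hypothetical placeholders.
    import sqlite3
    import networkx as nx

    conn = sqlite3.connect("email_metadata.db")
    cur = conn.execute("SELECT sender, recipient FROM messages")

    G = nx.DiGraph()  # directed graph: sender -> recipient
    for sender, recipient in cur:
        if G.has_edge(sender, recipient):
            G[sender][recipient]["weight"] += 1  # message count as edge weight
        else:
            G.add_edge(sender, recipient, weight=1)

    nx.write_gexf(G, "email_network.gexf")  # open this file in Gephi
    conn.close()

In Gephi, the edge weights can then drive line thickness and node degrees can drive filtering, much as in the visualizations below.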

Some of the visualizations we were able to generate from an anonymized email set using this procedure are shown below. Gephi is very flexible, allowing for a range of different network representations and filtering so that, for example, only highly connected individuals are shown. On the downside, it is still in alpha and, in our experience, not particularly robust: it crashed several times while we were attempting simple operations like adding text labels. And while importing and exporting data is easy, using the tool effectively requires some knowledge of the mathematics behind graphs and network analysis.

Visualization of Key Communicators in an Email Network


Use of Color to Show SubGroups within an Email Communication Network



Close-Up Showing Degree of Communication (as Line Thickness) between Participants in an Email Network

Tuesday, May 25, 2010

Beyond Keyword Searching

Sometimes we put documents into store for safe-keeping. We want them to be available if we should ever need them, but we are not expecting to review them on a regular basis. Tax filings, expired contracts and wills fall into this category. In a business environment, though, there are many documents we need to look at regularly or be able to retrieve quickly. There is nothing more frustrating than spending several hours hunting for a document you know is out there somewhere but can’t remember where it was filed, and countless studies have revealed that we all spend significant amounts of our working lives looking for information.

When SharePoint (and similar document management software) was first introduced, it seemed to offer a solution: behind-the-scenes text indexing (so users didn’t have to do anything other than upload their documents) and a really fast search engine that allowed users to retrieve documents based on the words in the text and a few key metadata fields such as title, author and folder name. However, while keyword searching is very effective in extremely large, highly heterogeneous information environments like the internet as a whole (Google being a case in point, and even they modify this approach for other services such as Shopping), it has significant limitations when looking for information in more focused environments, such as a business operation, where one of the primary needs is to group like documents together and separate them from unlike documents.

Without some form of tagging, it is not straightforward to carry out even quite simple-looking searches because the underlying language used to describe business concepts is not standardized. For example, the HR Department might be referred to as HR, Human Resources or Personnel. A project might be referred to by a project number, the client name, the project name, some abbreviation of the project name and so on. It is for this reason that most blogging software (such as this one) enables postings to be tagged/coded.
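As a trivial illustration of the terminology problem (the synonym lists below are invented examples), variant labels need to be mapped to a single canonical tag before like documents can be grouped reliably:

    # Minimal sketch: map variant department labels to one canonical tag
    # before tagging or indexing. The synonym lists are invented examples.
    CANONICAL_TERMS = {
        "Human Resources": {"hr", "human resources", "personnel"},
        "Information Technology": {"it", "information technology", "information systems"},
    }

    def canonical_tag(label):
        """Return the canonical tag for a variant label, or the label unchanged."""
        normalized = label.strip().lower()
        for canonical, variants in CANONICAL_TERMS.items():
            if normalized in variants:
                return canonical
        return label

    print(canonical_tag("Personnel"))  # -> Human Resources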
Beyond variation in terminology is the problem that nowhere in an office document is the purpose of the document automatically recorded. For example, there is no automatic way to distinguish a Word document that is a contract from one that is a proposal, or an internal PowerPoint presentation from an external one produced for a client meeting. Categorizing documents in this way requires human intervention and a document classification system that is agreed across the business entity.

SharePoint 2007 began to address some of the limitations of keyword searching by enabling documents to be tagged (or coded) on upload. Appropriate values for the tags/codes could be set up in lists (or, for the more sophisticated, as BDC connections to a database) that would appear to users as drop-down menus or, if few enough, as checkboxes or radio buttons. User compliance could be enforced by making tagging mandatory, so that documents couldn’t be uploaded unless appropriate values had been selected. However, this tagging could only be managed at the site level, which made enforcing standard values and classification systems across a business entity with many sites, let alone multiple site collections, too labor intensive.

SharePoint 2010 has extended its coding/tagging functionality in a variety of ways. It has introduced centralized coding management (aka Managed Metadata) that can be applied across an entire site collection. The Taxonomy Term Store (accessible to users with site administrator permissions) enables lists of terms to be created or imported (see figures 1 and 2 below) and then applied across all sites in a collection. Examples of the types of taxonomies that can usefully be managed in this way are departments, geographic regions, project names, product names and sizes/units. Once a term list has been made available across the site collection, it can be included as a properties column in any document library in the collection (see figure 3 below) and made available as a metadata filter for searching.
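Term lists can be imported into the Term Store from a CSV file. The sketch below writes out a small department taxonomy using the column layout of the sample import file that ships with SharePoint 2010; treat the exact headings as something to verify against Microsoft's documentation, and the department names as invented examples.

    # Minimal sketch: generate a CSV term list for import into the SharePoint
    # 2010 Term Store. The column headings follow the sample import file that
    # ships with SharePoint 2010 (verify against Microsoft's documentation);
    # the department names are invented examples.
    import csv

    HEADER = ["Term Set Name", "Term Set Description", "LCID",
              "Available for Tagging", "Term Description",
              "Level 1 Term", "Level 2 Term", "Level 3 Term", "Level 4 Term",
              "Level 5 Term", "Level 6 Term", "Level 7 Term"]

    ROWS = [
        ["Departments", "Company departments", "", "TRUE", "", "Human Resources"],
        ["", "", "", "TRUE", "", "Finance"],
        ["", "", "", "TRUE", "", "Finance", "Accounts Payable"],
    ]

    with open("departments.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(HEADER)
        for row in ROWS:
            writer.writerow(row + [""] * (len(HEADER) - len(row)))  # pad to 12 columns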

In SharePoint 2010, content administrators can also define hierarchies of Content Types that are meaningful to their business operation (e.g. Project Contracts, Financial Reports, Job Offer Letters) and deploy them across entire site collections. Each Content Type can have its own workflows, permissions and retention policies assigned, which are inheritable from general types (e.g. Contract) to more specific ones (e.g. Legal Contracts, Engineering Contracts).

The ability to centrally define and manage taxonomies and term/coding lists in SharePoint 2010 will make it much easier to manage effectively the large multi-site, multi-library document collections that now exist in many business organizations and are likely to grow further.

Friday, May 21, 2010

Using Tree Maps to Visualize Two Data Dimensions Simultaneously

The tree map (sometimes confusingly also known as a heat map) is a visualization technique in which data is represented as a series of rectangles whose size and color each encode a variable. It originated in displays of the values in a data matrix in which high values were represented by darker colored squares and low values by lighter squares.

Tree maps are a useful visualization technique when exploring situations in which two variables interact or are interdependent in some way: for example, analyzing profitability by company size and state, or the size and number of documents by custodian or document type.

In a logistics environment, when analyzing and ranking the opportunity presented by delivery to a set of zip codes, two dimensions of interest are delivery area and delivery volume. The larger the delivery area, the longer the travel time and the greater the cost. The larger the volume, the greater the profitability because, within the limits of carrying capacity, the marginal costs of additional deliveries are minimal. Zips can vary widely in land area, so the same delivery volume can represent a good business opportunity in one zip and not in another. Similarly, a low delivery volume may be acceptable if the delivery area is very small (e.g. a building or a single block).

The tree map (or heat map) below shows the relationship between the area of a zip and delivery volume.


The land area of the zip is represented by the size of the individual squares: the larger the square, the larger the land area. The color of the squares represents the delivery volume: the darker the color, the greater the delivery volume. From a logistics business perspective, small, very dark squares are good; large, light squares are bad. Using this visualization technique, it is very easy for sales and operational staff to identify which zips are likely to represent the better business proposition.

Several software packages are available which support such visualizations (see http://en.wikipedia.org/wiki/List_of_Treemapping_Software ). The one shown above was generated using LabEscape's Heat Map software (http://www.labescape.com/ ).
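For those who would rather script such a chart than use a packaged tool, the sketch below produces a comparable tree map in Python using matplotlib and the third-party squarify package; the zip codes and figures are invented for illustration.

    # Minimal sketch: tree map in which rectangle size encodes zip land area
    # and color encodes delivery volume. The zip codes and figures are invented.
    import matplotlib.pyplot as plt
    import matplotlib.cm as cm
    import matplotlib.colors as mcolors
    import squarify  # third-party package: pip install squarify

    # (zip code, land area in square miles, delivery volume)
    zips = [("60601", 0.3, 950), ("60614", 2.2, 780), ("60629", 6.5, 430),
            ("60633", 9.1, 120), ("60655", 4.0, 260)]

    areas = [z[1] for z in zips]
    volumes = [z[2] for z in zips]
    labels = [z[0] for z in zips]

    # Darker blue = higher delivery volume
    norm = mcolors.Normalize(vmin=min(volumes), vmax=max(volumes))
    colors = [cm.Blues(norm(v)) for v in volumes]

    squarify.plot(sizes=areas, label=labels, color=colors)
    plt.axis("off")
    plt.title("Delivery volume (color) by zip land area (size)")
    plt.show()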

Tuesday, May 18, 2010

New Out-of-the-Box Search Features in SharePoint 2010

Search has long been one of SharePoint’s strong points. It’s easy to use (simply type in a few keywords), very fast and seems to retrieve everything short of the kitchen sink. And therein also lay its weakness. Results came back as a long (often very long) list, mixing documents and folders together. If there was some form of relevance ranking going on, it wasn’t easy to spot. And unless you built your own search interface, the out-of-the-box search function didn’t allow for any use of the SharePoint metadata users had so carefully added, let alone scoping by document metadata.

SharePoint 2010 changes all that with a slew of new search functionality and a much improved results display. For example, it is now possible to use metadata to filter document sets and to navigate through document libraries.

SharePoint metadata (note: this is not document metadata but additional terms added by users on upload or by default through the assignment of a document to a particular folder/library) can be used to filter documents when searching or to navigate through a document library.

In addition to filtering by user-added metadata, it will also be possible to filter by a small subset of document metadata such as date created, date last modified and author. Key Filters are further supported by autocomplete functionality, so users will not have to remember all the possible options (or how to spell them!).

The display of search results is much improved. No more cryptic laundry lists! Each result is now presented with longer snippets from the document concerned, i.e. it looks much more like Bing. Compare the screenshots below: the first is from SharePoint 2007 and the second from SharePoint 2010. The relevance ranking algorithms have also been enhanced, which should mean that the most useful results display first.

Search Results in SharePoint 2007


Search Results in SharePoint 2010



Critical to winnowing down a large document set, SharePoint will now automatically display “Refinements” on the left-hand side of the search results which, as the name suggests, can be used to further narrow down the results. (Note: these refinements are derived from SharePoint metadata and basic document metadata such as dates and authors, so the more effort users and businesses put into tagging documents, the more useful this feature will be.)

At the bottom of the results page, there will also be “Did You Mean” suggestions to help users with possible misspellings, acronyms, etc.

And last but definitely not least, Microsoft has embraced the mobile world and made it easy to use SharePoint search features on any smart phone.

Sunday, May 16, 2010

Visualizing GeoSpatial Data

What do you do when you want to take a look at a large set of delivery address data to assess density, volume and spread? (And you don’t want to spend too much time and money doing so.) The obvious first thought is to map it using Google.


It’s free, easy to use and the pins can be adapted to represent the volume of deliveries. And if you don’t want to code, you can use an excellent online service like GPS Visualizer (www.gpsvisualizer.com).

The problem is that a pin-based solution becomes too cluttered after the first couple of hundred addresses and is no use at all if you have hundreds of thousands of addresses or want a quick overview of an entire region or state without drilling down to the street level.


One mapping technique that can be used to represent density effectively is known as a heat map: a graphical representation of data in which the values taken by a variable across a two-dimensional map are represented as colors. The following example shows several hundred thousand delivery addresses visualized as a heat map.


Using this technique it is very easy to identify areas of heavy density. (See: http://www.heatmapapi.com/ for a free Google-based API that produces similar style visualizations).
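For anyone who wants to produce a similar view without a third-party service, the sketch below bins a large set of latitude/longitude points into a density heat map with matplotlib; the coordinates are randomly generated stand-ins for real delivery addresses.

    # Minimal sketch: render a large set of delivery points as a density heat
    # map by binning them, rather than plotting individual pins. The lat/lon
    # values are randomly generated stand-ins for real delivery addresses.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    lons = rng.normal(loc=-87.65, scale=0.15, size=300_000)
    lats = rng.normal(loc=41.85, scale=0.10, size=300_000)

    plt.hexbin(lons, lats, gridsize=120, cmap="hot", bins="log")
    plt.colorbar(label="deliveries per cell (log scale)")
    plt.xlabel("longitude")
    plt.ylabel("latitude")
    plt.title("Delivery density heat map")
    plt.show()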

Friday, May 14, 2010

Microsoft SharePoint 2010: A Big Leap Forward!

SharePoint 2010 was officially released this week. It represents a significant upgrade from SharePoint 2007 and provides a much friendlier interface for designing and maintaining SP sites. One small enhancement speaks for the rest: no longer will adding an image to a page require a tortuous workflow and the cutting and pasting of URLs. In SP 2010, images can be added to a SP page as easily as they can be to any Office document. You simply select Insert from the ribbon, choose the picture option, browse to the image’s location and click OK. No URLs required!

Key enhancements from ChromaScope’s perspective include:

Search:
SharePoint search has been dramatically enhanced with features designed to improve searching over large document collections, including: improved relevance ranking; better result summaries, so that users can more easily identify whether a document is of interest; and “Refinements”, which are automatically determined from the document set, presented in the left-hand column (e.g. content type, document dates, document authors and other key metadata), and can be used to navigate and filter through a set of documents. SP 2010 also has “Did You Mean” suggestions, and the People Search will handle nicknames and carry out phonetic name matching. In fact, there are so many useful new search features that they will be discussed in a later ChromaScope post. Also available (at additional cost) is the FAST Search Server for those requiring enterprise-wide search capabilities. FAST is scalable to billions of documents, has the ability to extract metadata for use in searching, and provides thumbnail previews of Office documents so that users can quickly assess relevance without actually opening them.

Records Management, Document Retention, Preservation and Legal Hold:
In SP 2010, records management is no longer confined to sites specifically set up to manage records. Records management features – including setting policies for compliance, storage and retention – will be available across all content libraries and sites. For preservation and legal hold, documents can be declared as “records” and locked from future editing or deletion. For preservation and retention purposes, specific workflows can be designed to automatically transfer documents meeting specified criteria to a dedicated document archive.

Office Web Applications:
No longer will it be necessary to have the native applications available to view Office documents stored in SharePoint. Once produced and uploaded to SharePoint, documents can be viewed and edited in the browser. This immediately makes using SharePoint on a smart phone (or, dare I say, an iPad) a viable option, as well as facilitating the use of SharePoint for document review (no need to provide the contract attorneys with desktop copies of Office!).

Managed Metadata:
In SP 2010 it is possible to set up and manage centralized taxonomies (e.g. document types, organizational departments, geographic locations, project codes) and deploy these across the entire site collection. This will make it significantly easier to code and tag documents consistently and hence easier to search and retrieve.

Scalability:
It will be possible to scale SP to handle millions of documents; figures of up to 200 million documents per library are being quoted. This makes SP a viable repository for large-scale archiving, records management and document review.

For more information see Microsoft’s own SP 2010 site: http://sharepoint.microsoft.com/
(Note: the most useful overview of the new features and functionality can be found in the two downloadable documents: the SharePoint 2010 Evaluation Guide and the SharePoint 2010 Walkthrough Guide.)

All About ChromaScope

ChromaScope aims to spotlight innovation and development in the areas of information management and data analysis. Its focus will be on how these technologies can make a difference to businesses in document- and data-rich environments such as logistics, legal and regulatory, and healthcare. Its approach will be pragmatic and down-to-earth. Cost is always important. IT resources are often scarce. However cool the technology, for a business there needs to be an ROI.

One of the great developments of the past few years has been the availability of low- to no-cost technology that can be rapidly integrated into an existing systems environment and used not only to improve productivity and enhance operations but also to provide the business with competitive advantage. Thinking Smart can trump Deep Pockets! The fact that much of this technology is web-based has dramatically simplified deployment, and the advent of the cloud has meant that no servers need to be owned.

ChromaScope intends to focus on just those innovations in the areas of document management and complex data analytics and explore how they can be effectively deployed to add value to businesses.