Every now and then we feel like children outside a candy store, faces pressed to the window, eyeing the good things within. Today was one of those moments: we came across a reference to Palantir Technologies’ data analytics platform on TechCrunch and went to investigate further.
Palantir is a data analysis platform that integrates structured and unstructured data from a variety of sources – documents, databases, email communications – and provides the sophisticated tools required to search and analyze it. The company, Palantir Technologies (http://www.palantir.com/), focuses on two verticals: Finance and Government, with the latter accounting for 70% of its business. The Government business is divided into Intelligence and Defense; Financial Regulation (Palantir is currently being used to monitor ARRA stimulus funding for fraud and alert the various Inspectors General to suspicious activity); Cybersecurity; and Healthcare (e.g. tracing the origin of food poisoning outbreaks, or correlating hospital quality indicators with Medicare cost reports). Palantir has also teamed up with Thomson Reuters to develop a next-generation financial analysis platform.
To deliver this functionality, the Palantir platform incorporates a number of different technologies. Its text search engine is based on Lucene, a Java-based text retrieval engine that has been around for a long time. Lucene, like most text retrieval software, operates on an inverted index: it builds a list of keywords (ignoring stop words – words such as ‘the’ or ‘a’ in English that are so common they are not useful in a search) and records, against each term, the set of documents (and positions within each document) where the term occurs. One of Palantir’s customizations adjusts the retrieved results so that users can only see information they are cleared to view (a necessary requirement for some of Palantir’s national security customers). If a user doesn’t have access to a piece of information, its existence is totally suppressed; it will never appear, even in a keyword count.
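To make the inverted-index idea concrete, here is a minimal sketch in Python (our own illustration, not Palantir or Lucene code). The clearance check mirrors the behavior described above: documents a user is not cleared to see are dropped before anything is counted or displayed. The stop-word list and clearance levels are invented for the example.

    # A toy inverted index with position tracking and clearance filtering.
    # Illustrative only - real engines such as Lucene are far more sophisticated.
    from collections import defaultdict

    STOP_WORDS = {"the", "a", "an", "and", "of", "to"}   # tiny example list

    class ToyIndex:
        def __init__(self):
            self.postings = defaultdict(lambda: defaultdict(list))  # term -> {doc_id: [positions]}
            self.clearance = {}                                     # doc_id -> required level

        def add(self, doc_id, text, required_level=0):
            self.clearance[doc_id] = required_level
            for pos, word in enumerate(text.lower().split()):
                if word not in STOP_WORDS:
                    self.postings[word][doc_id].append(pos)

        def search(self, term, user_level=0):
            # Documents the user cannot see are suppressed entirely,
            # so they never appear in results or keyword counts.
            hits = self.postings.get(term.lower(), {})
            return {doc: pos for doc, pos in hits.items() if self.clearance[doc] <= user_level}

    idx = ToyIndex()
    idx.add("memo-1", "The shipment arrives at the port", required_level=0)
    idx.add("memo-2", "Port surveillance report", required_level=2)
    print(idx.search("port", user_level=0))   # only memo-1 is visible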
To test drive Palantir, go to https://www.analyzethe.us/ and use the 'Analyze the US' application to explore public-domain information about the US. The interface is easy to use once you have adjusted to the UI metaphor, and most functions can be achieved by drag-and-drop. A set of test data is provided, e.g. mortality statistics for various US hospitals. As with all data analysis systems, the challenge is knowing what questions to ask within the context of the available data.
Palantir has one of the easiest-to-use geospatial analysis interfaces we’ve seen. Any group of geocodeable entities can be viewed on a map simply by dragging and dropping the selection onto the Map icon. Geospatial searches can be carried out over an area defined by radius, polygon or route, and HeatMap and TreeMap geovisualizations are also supported. We tried importing some geocoded distribution data to see whether we could produce a HeatMap of delivery density, and were able to do so quickly and with minimal effort (see below, based on Richmond VA).
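For readers who want to experiment with the same idea outside Palantir, the sketch below shows how a set of geocoded delivery points can be turned into a quick density view using Python and matplotlib. It is our own illustration; the input file name and column names are invented.

    # Plot delivery density from a CSV of geocoded stops (lat/lon columns assumed).
    import csv
    import matplotlib.pyplot as plt

    lats, lons = [], []
    with open("deliveries.csv", newline="") as f:       # hypothetical input file
        for row in csv.DictReader(f):
            lats.append(float(row["lat"]))
            lons.append(float(row["lon"]))

    # Hexagonal binning gives a heat-map-like view of delivery density.
    plt.hexbin(lons, lats, gridsize=40, cmap="hot_r")
    plt.colorbar(label="deliveries per cell")
    plt.xlabel("longitude")
    plt.ylabel("latitude")
    plt.title("Delivery density")
    plt.show()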
Palantir would seem to be an ideal tool for use in forensic accounting and fraud investigations where there are a large number of interconnected persons of interest and organizational entities. Similarly, its ability to integrate structured data and documents might also be helpful in complex finance, fraud and IP-related litigation, where the legal team needs a way of analyzing and understanding a large set of both data and documents. Recent sub-prime-related litigation comes to mind, as does complex mergers and acquisitions work.
Friday, June 25, 2010
Wednesday, June 16, 2010
Using MapPoint 2010 for Route Analysis
As anyone who has been involved in routing of any scale knows, there are few software tools available, and most of those come with a hefty price tag. If you want to create a single route, there is a range of options at various price points. If you want to create multiple routes from a set of addresses, or analyze a large number of routes simultaneously, options are limited and the applications available tend to have been developed for quite specific requirements, which may or may not match those of the task at hand.
On a recent project, the requirement was to take a large number of predefined routes and calculate travel time and distance for each route. Each predefined route had to be maintained as such (i.e. stops could not be transferred between routes), but to obtain more accurate times and distances we did decide to optimize the sequencing of stops within routes. This set of requirements is not what most routing applications are designed to do! Higher-end applications such as ESRI’s ArcLogistics will allow you to create optimized routes from a set of delivery addresses, but they are not designed to support analysis of an existing set of routes.
For this assignment, the tool available was MapPoint 2010. While this latest version of MapPoint has been enhanced to meet the needs of business users wanting to carry out various forms of geospatial analysis (e.g. revenue by sales territory, customer location), routing (outside of some minor upgrades such as enabling route information to be sent to GPS devices) has obviously not been a priority. MapPoint does come with an API, so it is possible to engineer a bespoke application in support of a particular need, but deadline constraints meant that we did not have time to pursue this approach.
Importing data into MapPoint 2010 is straightforward (although it would be helpful if the data importer recognized a broader range of data types, e.g. time), and it was possible to load the data so that route and stop number information was preserved. However, once imported, it was not possible to use the routeID to manipulate the data. To do what we needed to do, we would have had to import each route individually to create separate datasets. (Note: the ability to transfer pushpins between datasets or to merge datasets does not seem to work as advertised.)
Reporting and/or export of route information to Excel in MapPoint 2010 is also limited. The product seems mainly geared to producing turn-by-turn directions, which we did not need for this project. The built-in export-to-Excel function allows you to export a dataset (which would have been viable if each route had been imported as a separate dataset), but there is no means to customize the export and, strangely, vital route information such as distance and travel time is not included – making the export useless for any form of route analysis.
The solution turned out to be a third-party add-on (RouteReader/RouteWriter) from Mapping Tools (www.mappingtools.com) which allowed us to select individual routes, optimize them and then output the results – including drive time and distance – to Excel. There were occasional odd results with RouteWriter arising from a particular stop being present on two different routes (the application was obviously using location information rather than routeID when outputting to Excel), but other than that the application worked well. The big “however”, however, was that each route still had to be analyzed individually. Since there were 120 routes, this took a significant amount of time. Our ideal application would have allowed us to set up batch route creation (by routeID, sequenced either by stopID or optimized) and to batch output the results to Excel.
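The sketch below shows, in Python, the kind of batch workflow we were wishing for: stops grouped by routeID, re-sequenced with a simple nearest-neighbor heuristic, and per-route distance written to a CSV that Excel can open. It is a generic illustration using straight-line (haversine) distances rather than MapPoint drive times, and the input file and column names are invented.

    # Batch route analysis: group stops by routeID, optimize the stop order with a
    # nearest-neighbor heuristic, and report total distance per route.
    # Straight-line distances only - a real tool would use road-network drive times.
    import csv
    from collections import defaultdict
    from math import radians, sin, cos, asin, sqrt

    def haversine_miles(a, b):
        lat1, lon1, lat2, lon2 = map(radians, (a[0], a[1], b[0], b[1]))
        h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 3959 * 2 * asin(sqrt(h))

    def nearest_neighbor(stops):
        # Crude sequencing heuristic: always drive to the closest unvisited stop.
        remaining, ordered = stops[1:], [stops[0]]
        while remaining:
            nxt = min(remaining, key=lambda s: haversine_miles(ordered[-1][1], s[1]))
            remaining.remove(nxt)
            ordered.append(nxt)
        return ordered

    routes = defaultdict(list)                    # routeID -> [(stopID, (lat, lon))]
    with open("stops.csv", newline="") as f:      # hypothetical input file
        for row in csv.DictReader(f):
            routes[row["routeID"]].append((row["stopID"], (float(row["lat"]), float(row["lon"]))))

    with open("route_summary.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["routeID", "stops", "distance_miles"])
        for route_id, stops in routes.items():
            ordered = nearest_neighbor(stops)
            total = sum(haversine_miles(a[1], b[1]) for a, b in zip(ordered, ordered[1:]))
            writer.writerow([route_id, len(ordered), round(total, 1)])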
Unfortunately, even RouteReader/RouteWriter could not overcome a fundamental problem with MapPoint 2010 – a strange inability to geocode addresses along interstate or state highways. At first we thought it was a naming issue: many highways have multiple “names” depending on the segment, so possibly we simply didn’t have the preferred street name for the segment of highway in question. However, this was not the case. Street-number-level geocoding does not seem to be available for many highway segments in the area we were investigating (the Southeast US), even though these are not new developments.
To work around this, we had to laboriously confirm each unidentified address using Bing Maps (which is a great tool because it returns the “official” version of an address, together with the zip+4), and then force the stop back into MapPoint at the correct location using the Lat/Lon obtained from Google. And since we could not get MapPoint to transfer pushpins between datasets, we then had to manually add these “invalid” stops into their intended route before optimizing the route and reporting out to Excel. This added a considerable amount of time and effort to what was already a slow process. If only we could have routed on Bing Maps! Last but not least, if an address is incorrect (and we had several), it would be very helpful to have the opportunity in MapPoint to correct it and re-match it on the spot.
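For anyone tackling the same problem today, the address-confirmation step can be scripted against the Bing Maps REST Locations service rather than done by hand in the browser. The sketch below is our own illustration; it assumes you have a Bing Maps API key, and the exact response layout should be checked against the current service documentation.

    # Geocode an address with the Bing Maps REST Locations service.
    # Requires a Bing Maps key - the value below is a placeholder.
    import json
    import urllib.parse
    import urllib.request

    BING_MAPS_KEY = "YOUR_KEY_HERE"

    def geocode(address):
        url = ("http://dev.virtualearth.net/REST/v1/Locations?"
               + urllib.parse.urlencode({"query": address, "key": BING_MAPS_KEY}))
        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)
        resource = data["resourceSets"][0]["resources"][0]
        lat, lon = resource["point"]["coordinates"]
        return resource["address"]["formattedAddress"], lat, lon

    # Example (hypothetical address):
    print(geocode("9200 Midlothian Turnpike, Richmond, VA"))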
Sunday, June 6, 2010
Searching SharePoint 2010 with FAST
FAST is a high-end search engine offered by Microsoft (at additional cost) as an enterprise-level alternative to SharePoint’s built-in search engine. Whereas standard SharePoint 2010 can handle millions of documents, the FAST search engine can index and search over a hundred million, i.e. it can scale to handle not only document management for an entire organization but also more specialist requirements such as regulatory compliance and litigation document review. It also has extensive support for languages other than English, including Chinese, Japanese and Korean.
As well as being an enterprise-level search engine, FAST incorporates a number of features designed to make it easier for end users to find things. For example, many users remember documents by their visual appearance, so FAST displays a small thumbnail next to each document summary, allowing users looking for a specific document to identify it rapidly. FAST also includes a graphical previewer for PowerPoint documents which can be used, for example, to find one particular slide in a presentation without having to open the whole file and go through it slide by slide. Results also include links to ‘Similar Results’ and to ‘Duplicates’.
To support its search capabilities, FAST includes extremely powerful content processing based on linguistics and text analysis. Examples of linguistic processing applied to both items and queries include character normalization, normalization of stemming variations and suggested spelling corrections. FAST automatically extracts document metadata such as author and date last modified, and makes them available for fielded searching, faceted search refinement and relevancy tuning. In addition to document metadata, it is also possible to define what Microsoft refers to as “managed properties”. These are categories such as organization names, place names and dates that may exist in the content of the document and can help develop or refine a search. Defining a custom extractor enables such properties to be identified and indexed. (Note: this is a similar capability to that offered by several ‘Early Case Assessment’ tools in the litigation space.)
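To make the managed-property idea concrete, here is a conceptual sketch in Python of what a property extractor does: scan document text for values such as dates or known organization names so they can be indexed as refinable properties. This is purely illustrative – it is not how FAST’s content processing pipeline is actually configured – and the dictionary and patterns are invented.

    # Conceptual sketch of custom property extraction: pull dates and known
    # organization names out of document text so they can be indexed as
    # searchable/refinable properties. Not FAST code.
    import re

    KNOWN_ORGS = {"Acme Corp", "Contoso", "Fabrikam"}        # example dictionary
    DATE_PATTERN = re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b")  # e.g. 6/16/2010

    def extract_properties(text):
        props = {"dates": DATE_PATTERN.findall(text), "organizations": []}
        lowered = text.lower()
        for org in sorted(KNOWN_ORGS):
            if org.lower() in lowered:
                props["organizations"].append(org)
        return props

    sample = "On 6/16/2010 Contoso signed the supply agreement with Acme Corp."
    print(extract_properties(sample))
    # {'dates': ['6/16/2010'], 'organizations': ['Acme Corp', 'Contoso']}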
SharePoint 2010 Standard provides the ability to refine search results based on key metadata/properties such as document type, author and date created; by default, these refinement values are based on the first 50 results returned. With FAST, refinement moves to a whole other level – so-called ‘Deep’ refinement – where the refinement categories are based on managed properties across the entire result set. Users are presented with a list of refinement categories together with the counts within each category. (Note: this functionality is similar to the refinement capability that many major eCommerce sites provide, e.g. NewEgg.com, BestBuy, etc.)
A detailed feature comparison between SharePoint 2010 Standard Search and FAST, together with further information about FAST, is provided in Microsoft’s document “FAST Search Server 2010 for SharePoint Evaluation Guide”, downloadable from http://www.microsoft.com/downloads/details.aspx?FamilyID=f1e3fb39-6959-4185-8b28-5315300b6e6b&displaylang=en
Example of a FAST Results Display
Example of FAST Refinement Category List for a Results Set
SharePoint 2010 with FAST: Architectural Overview
Monday, May 31, 2010
Visualizing Email Communications
Visualization of an email communication network
Social network analysis software enables the interactions and relationships between individuals or organizations to be modeled and visualized in a graphical format, with individuals/organizations represented as nodes and interactions/relationships as edges. In recent months, such software has been used extensively to model and analyze behavior in social spaces such as Facebook and Twitter. It seemed to us that it might also be useful in analyzing and understanding communication patterns in emails.
Emails are a key source of information in litigation (witness the publication of significant emails in the recent Goldman Sachs case), and are also monitored for compliance reasons in regulated industries. While most review of email data involves some form of keyword searching, there are occasions when it is important to understand who is in communication with whom, particularly if an investigation is at an early stage and may need to be broadened.
Understanding patterns of communication (as evidenced by email traffic) is also important when investigating why projects are failing or teams are not performing effectively. There is a substantial body of research that shows that communication issues are one of the primary reasons behind failing projects and dysfunctional teams. Analyzing and understanding the pattern of communications within a team or department can help business leaders and project managers identify where the breakdowns are occurring and target remedial action.
Gephi is an open source tool for visualizing networks (http://www.gephi.org/). It runs on Windows, Linux and Mac OS X. While it will import files in a variety of formats (including CSV), the recommended format for importing data is .gexf – Graph Exchange XML Format (see: http://gexf.net/format/index.html). GEXF is an XML-based file format that is straightforward to generate once basic email metadata has been extracted and stored in a SQL database.
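To illustrate how straightforward that generation step is, the sketch below reads sender/recipient pairs from a SQLite table and writes a minimal GEXF file that Gephi can open. The database, table and column names are assumptions, as is the GEXF version string – adjust them to your own extraction schema and to the GEXF release your copy of Gephi expects.

    # Build a GEXF file of an email network from a SQLite table of messages.
    # Assumed schema: messages(sender TEXT, recipient TEXT) - one row per recipient.
    import sqlite3
    from collections import Counter
    from xml.sax.saxutils import quoteattr

    conn = sqlite3.connect("email_metadata.db")          # hypothetical database
    edges = Counter()                                    # (sender, recipient) -> message count
    for sender, recipient in conn.execute("SELECT sender, recipient FROM messages"):
        edges[(sender, recipient)] += 1

    addresses = sorted({addr for pair in edges for addr in pair})
    node_ids = {addr: i for i, addr in enumerate(addresses)}

    with open("email_network.gexf", "w", encoding="utf-8") as out:
        out.write('<?xml version="1.0" encoding="UTF-8"?>\n')
        out.write('<gexf xmlns="http://www.gexf.net/1.2draft" version="1.2">\n')
        out.write('  <graph mode="static" defaultedgetype="directed">\n    <nodes>\n')
        for addr, node_id in node_ids.items():
            out.write('      <node id="%d" label=%s/>\n' % (node_id, quoteattr(addr)))
        out.write('    </nodes>\n    <edges>\n')
        for edge_id, ((src, dst), count) in enumerate(edges.items()):
            out.write('      <edge id="%d" source="%d" target="%d" weight="%d"/>\n'
                      % (edge_id, node_ids[src], node_ids[dst], count))
        out.write('    </edges>\n  </graph>\n</gexf>\n')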
Some of the visualizations we were able to generate from an anonymized email set using this procedure are shown below. Gephi is very flexible, allowing a range of different network representations and filtering so that, for example, only highly connected individuals are shown. On the downside, it is still in alpha and, in our experience, not particularly robust: it crashed several times while we were attempting simple operations like adding text labels. And while importing and exporting data is easy, some knowledge of the mathematics behind graphs and network analysis is helpful in order to use the tool effectively.
Visualization of Key Communicators in an Email Network
Use of Color to Show SubGroups within an Email Communication Network
Close-Up Showing Degree of Communication (as Line Thickness) between Participants in an Email Network
Tuesday, May 25, 2010
Beyond Keyword Searching
Sometimes we put documents into storage for safe-keeping. We want them to be available if we should ever need them, but we are not expecting to review them on a regular basis. Tax filings, expired contracts and wills fall into this category. In a business environment, though, there are many documents we need to look at regularly or be able to retrieve quickly. There is nothing more frustrating than spending several hours hunting for a document you know is out there somewhere but can’t remember where it was filed, and countless studies have shown that we all spend a significant part of our working lives looking for information.
When SharePoint (and similar document management software) was first introduced, it seemed to offer a solution: behind-the-scenes text indexing (so users didn’t have to do anything other than upload their documents) and a really fast search engine that allowed users to retrieve documents based on the words in the text and a few key metadata fields such as title, author and folder name. However, while keyword searching is very effective in extremely large, highly heterogeneous information environments like the internet as a whole (Google being a case in point – and even Google modifies this approach for services such as Shopping), it has significant limitations when looking for information in more focused environments – such as a business operation – where one of the primary needs is to group like documents together and separate them from unlike ones.
Without some form of tagging, it is not straightforward to carry out even quite simple looking searches because the underlying language used to describe business concepts is not standardized. For example, the HR Department might be referred to as: HR, Human Resources and Personnel. A Project might be referred to by a project number, the client name, the project name, some abbreviation of the project name and so on. It is for this reason that most blogging software (such as this one) enables postings to be tagged/coded.
And beyond variation in terminology is the problem that nowhere in office documents is the purpose of the document automatically recorded. For example, there is no automatic way to distinguish a Word document that is a contract from one that is a proposal, or an internal PowerPoint presentation from an external one produced for a client meeting. To categorize documents in this way requires human intervention and a document classification system that is agreed across the business entity.
SharePoint 2007 began to address some of the limitations of keyword searching by enabling documents to be tagged (or coded) on upload. Appropriate values for the tags/codes could be set up in lists (or, for the more sophisticated, as BDC connections to a database) that would appear to users as drop-down menus or, if there were few enough values, as checkboxes or radio buttons. User compliance could be enforced by making tagging mandatory, so that documents couldn’t be uploaded unless appropriate values had been selected. However, this tagging could only be managed at the site level, which made the enforcement of standard values and classification systems across a business entity with many site collections, let alone sites, too labor intensive.
SharePoint 2010 has extended its coding/tagging functionality in a variety of ways. It has introduced centralized coding management (aka Managed Metadata) that can be applied across an entire site collection. The Taxonomy Term Store (accessible to users with site administrator permissions) enables lists of terms to be created or imported (see figures 1 and 2 below) which can then be applied across all sites in a collection. Examples of the types of taxonomies that can be usefully managed in this way would be departments, geographic regions, project names, product names, sizes/units. Once a term list has been made available across the site collection, it can be included as a properties column in any document libraries across the entire site collection (see figure 3 below) and made available as a metadata filter for searching.
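If a term list needs to be bulk-loaded, the Term Store accepts a managed metadata import file in CSV form. The snippet below generates such a file for a flat list of departments; the column headings follow the sample import file Microsoft provides, but treat the sample file downloadable from the Term Store itself as the authoritative reference for the exact layout.

    # Generate a SharePoint 2010 managed metadata import file (CSV) for a flat
    # term set of departments. Verify column layout against Microsoft's sample file.
    import csv

    HEADERS = ["Term Set Name", "Term Set Description", "LCID", "Available for Tagging",
               "Term Description", "Level 1 Term", "Level 2 Term", "Level 3 Term",
               "Level 4 Term", "Level 5 Term", "Level 6 Term", "Level 7 Term"]

    departments = ["Human Resources", "Finance", "Engineering", "Legal"]  # example terms

    with open("departments_termset.csv", "w", newline="") as f:
        writer = csv.writer(f, quoting=csv.QUOTE_ALL)
        writer.writerow(HEADERS)
        for i, term in enumerate(departments):
            # The first row also names and describes the term set.
            prefix = (["Departments", "Company departments", "1033", "TRUE", ""]
                      if i == 0 else ["", "", "", "TRUE", ""])
            writer.writerow(prefix + [term, "", "", "", "", "", ""])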
In SharePoint 2010, content administrators can also define hierarchies of Content Types that are meaningful to their business operation (e.g. Project Contracts, Financial Reports, Job Offer Letters) and deploy them across entire Site Collections. Each Content Type can be assigned its own workflows, permissions and retention policies, which are inherited from general types (e.g. Contract) down to more specific ones (e.g. Legal Contracts, Engineering Contracts).
The ability to centrally define and manage taxonomies and term/coding lists in SharePoint 2010 will make it much easier to manage effectively the large multi-site, multi-library document collections that now exist in many business organizations and are likely to grow further.
Friday, May 21, 2010
Using Tree Maps to Visualize Two Data Dimensions Simultaneously
Tree Maps (sometimes confusingly also known as Heat Maps) are a visualization technique in which data is represented as a series of rectangles, with two data dimensions encoded by the rectangles’ color and size. The technique originated in displays of the values in a data matrix, in which high values were represented by darker-colored squares and low values by lighter ones.
Tree Maps are a useful visualization technique when exploring situations in which two variables interact or are interdependent in some way – for example, profitability by company size and state, or the size and number of documents by custodian or document type.
In a logistics environment, when analyzing and ranking the opportunity presented by delivering to a set of zip codes, two dimensions of interest are delivery area and delivery volume. The larger the delivery area, the longer the travel time and the greater the cost. The larger the volume, the greater the profitability, because within the limits of carrying capacity the marginal cost of additional deliveries is minimal. Zips can vary widely in land area, so the same delivery volume can represent a good business opportunity in one zip and not in another. Similarly, a low delivery volume may be acceptable if the delivery area is very small (e.g. a building or a single block).
The tree map (or heat map) below shows the relationship between the area of a zip and delivery volume.
The land area of each zip is represented by the size of the individual squares: the larger the square, the larger the land area. The color of the squares represents the delivery volume: the darker the color, the greater the delivery volume. From a logistics business perspective: small, very dark squares are good; large, light squares are bad. Using this visualization technique, it is very easy for sales and operational staff to identify which zips are likely to represent the better business proposition.
Several software packages are available that support such visualizations (see http://en.wikipedia.org/wiki/List_of_Treemapping_Software). The one shown above was generated using LabEscape's Heat Map software (http://www.labescape.com/).
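If you would rather script such a chart, a similar view can be produced in a few lines of Python. The sketch below uses matplotlib together with the third-party squarify package (one of several free treemapping options – not the tool used for the chart above), and the zip code figures are invented.

    # Tree map of zip codes: rectangle size = land area, shade = delivery volume.
    # Requires matplotlib and the third-party "squarify" package.
    import matplotlib.cm as cm
    import matplotlib.pyplot as plt
    import squarify

    # Invented example data: (zip, land area in sq. miles, weekly delivery volume)
    zips = [("23220", 3.2, 410), ("23113", 21.5, 95), ("23060", 14.8, 260),
            ("23235", 12.1, 180), ("23173", 0.5, 40)]

    areas = [z[1] for z in zips]
    volumes = [z[2] for z in zips]
    labels = ["%s\n%d/wk" % (z[0], z[2]) for z in zips]

    # Darker red = higher delivery volume.
    colors = [cm.Reds(v / max(volumes)) for v in volumes]

    squarify.plot(sizes=areas, label=labels, color=colors, alpha=0.9)
    plt.axis("off")
    plt.title("Delivery volume by zip (size = land area, shade = volume)")
    plt.show()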
Tuesday, May 18, 2010
New Out-of-the-Box Search Features in Sharepoint 2010
Search has long been one of SharePoint’s strong points. It’s easy to use – simply type in a few keywords – very fast, and seems to retrieve everything short of the kitchen sink. And therein also lay its weakness. Results came back as a long (often very long) list, mixing documents and folders together. If there was some form of relevance ranking going on, it wasn’t easy to spot. And unless you built your own search interface, the out-of-the-box search function didn’t allow for any use of the SharePoint metadata users had so carefully added, let alone scoping by document metadata.
SharePoint 2010 changes all that with a slew of new search functionality and a much improved results display. For example, it is now possible to use metadata to filter document sets and to navigate through document libraries.
SharePoint metadata (note: this is not document metadata but additional terms added by users on upload, or applied by default through the assignment of a document to a particular folder/library) can be used to filter documents when searching or to navigate through a document library.
In addition to user-added metadata filtering, it will be possible to filter by a small subset of document metadata such as date created, date last modified and author. Key Filtering is further supported by autocomplete functionality, so users will not have to remember all the possible options (or how to spell them!).
The display of search results is much improved. No more cryptic laundry lists! Each result is now presented with longer snippets from the documents concerned i.e. it looks much more like Bing. Compare the screenshots below. The first is from SharePoint 2007 and the second from SharePoint 2010. The relevance ranking algorithms have also been enhanced which should mean that the most useful results display first.
Search Results in SharePoint2007
Search Results in SharePoint2010
Critical to winnowing down a large document set, SharePoint will now automatically display “Refinements” on the left hand side of search results which, as the name suggests, can be used to narrow down further the results. (Note: these refinements are derived from SharePoint and basic document metadata (dates, authors etc) so the more effort users and businesses put into tagging documents, the more useful this feature will be).
At the bottom of the results page, there will also be “Did You Mean” suggestions to help users with possible misspelling, acronyms etc.
And last but definitely not least, Microsoft has embraced the mobile world and made it easy to use SharePoint search features on any smart phone.