Sunday, December 30, 2007

Subversion

By: Jeremy Whitlock
November 28th, 2006
Summary: This article explains what Subversion is through the eyes of a Subversion tools developer and consultant.

Introduction
Depending on who you ask, Subversion can be many things to many people. This article explains, from my perspective, what Subversion is. Along the way, I will step into the shoes of a few key users of Subversion to explain how each of them views Subversion and how their views differ. Before we get into the details, let's learn what Subversion is from a high-level perspective, and then dig deeper by walking in the shoes of our theoretical users.
Subversion At A Glance
Out of the box, and in its simplest form, Subversion is nothing more than an advanced, open source version control system. Its sole purpose is to help you track the changes to directories of files under version control. This isn't to say that Subversion cannot be the cornerstone of your build management, release management and continuous integration efforts, which we will discuss later, but out of the box, Subversion just cares about the directories and files it is supposed to track the changes to.
Subversion History Abridged
Back in 2000, CollabNet decided to create a replacement for CVS. This decision came after running into problems and limitations of CVS, not only during development but also with the CVS integration in their flagship product, CollabNet Enterprise Edition, a collaboration and development platform for distributed development. CollabNet reached out to Karl Fogel, author of Open Source Development with CVS, to ask if he would like to be involved. Coincidentally, he and Jim Blandy had already been talking about such a project, and they agreed to take it on. Their plan was to create a tool that did not deviate too much from CVS's development and usage model but would fix the apparent problems of CVS. To make a long story short, Subversion was born.
Subversion Features
Now that we know what Subversion is, from a high level, and how it came about, let's look at a few of its more impressive features to get a better understanding of what Subversion brings to the table.
Directory Versioning
Directory versioning is the idea of versioning a directory's structure just as you version the structure and content of a file. Subversion uses a virtual filesystem to allow for directory versioning, and the end result is that you can track changes to directory structures just as you can track the contents of files.
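As a minimal sketch of directory versioning, the following uses a throwaway local repository (the `src` directory name is just an example; `svn` and `svnadmin` are assumed to be installed):

```shell
set -e
REPO="$(mktemp -d)/repo"
svnadmin create "$REPO"
WC="$(mktemp -d)/wc"
svn checkout "file://$REPO" "$WC" -q

# Directories are first-class versioned objects: create and commit one.
svn mkdir "$WC/src" -q
svn commit "$WC" -m "Add src directory" -q

# The verbose log lists the directory itself among the changed paths.
LOG="$(svn log -v -r 1 "file://$REPO")"
echo "$LOG"
```

The changed-path list for revision 1 shows `A /src`, i.e. the directory addition was recorded in history exactly like a file change would be.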
True Version History
True version history means that when you copy or rename a resource, the newly created resource has its own history and is seen as a new object. Since copying and renaming are extremely common operations, true version history is a valuable feature: each object is its own entity with its own history, regardless of whether it was created by a copy or a rename.
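The following sketch (again using a throwaway local repository and hypothetical file names) shows both sides of this: a copied file inherits the full lineage of its source, yet its own history starts at the copy.

```shell
set -e
REPO="$(mktemp -d)/repo"
svnadmin create "$REPO"
WC="$(mktemp -d)/wc"
svn checkout "file://$REPO" "$WC" -q

echo "hello" > "$WC/foo.txt"
svn add "$WC/foo.txt" -q
svn commit "$WC" -m "Add foo.txt" -q               # r1

svn copy "$WC/foo.txt" "$WC/bar.txt" -q
svn commit "$WC" -m "Copy foo.txt to bar.txt" -q   # r2
svn update "$WC" -q

# bar.txt carries the full lineage of foo.txt (r1 and r2)...
FULL=$(svn log "$WC/bar.txt" | grep -c '^r[0-9]')
# ...but is a distinct object whose own history begins at the copy (r2).
OWN=$(svn log --stop-on-copy "$WC/bar.txt" | grep -c '^r[0-9]')
echo "$FULL $OWN"
```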
Atomic Commits
Atomic commits mean that your commit either lands entirely or not at all. Unlike non-atomic systems, where a partial commit is possible, Subversion undoes the whole commit transaction if a problem arises. This means an interrupted commit never leaves the repository in a corrupt or inconsistent state.
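One visible consequence of atomicity is that a multi-file commit produces exactly one revision, never a half-applied pair. A small sketch with a throwaway repository and made-up file names:

```shell
set -e
REPO="$(mktemp -d)/repo"
svnadmin create "$REPO"
WC="$(mktemp -d)/wc"
svn checkout "file://$REPO" "$WC" -q

# Two related changes go in as a single, indivisible revision.
echo "int main(void) { return 0; }" > "$WC/main.c"
echo "all: main" > "$WC/Makefile"
svn add "$WC/main.c" "$WC/Makefile" -q
svn commit "$WC" -m "Add build file and entry point together" -q

# Both files arrived in one revision: the repository is at r1, not r2.
YOUNGEST="$(svnlook youngest "$REPO")"
echo "$YOUNGEST"
```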
Versioned Metadata
Versioned metadata is the ability to attach key-value pairs to a versioned object. Each pair is called a property, and properties are versioned just like the objects to which they are attached.
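Properties are set with `svn propset` and read back with `svn propget`. In this sketch the property name `review-status` is made up for illustration; only names in the `svn:` namespace have built-in meaning.

```shell
set -e
REPO="$(mktemp -d)/repo"
svnadmin create "$REPO"
WC="$(mktemp -d)/wc"
svn checkout "file://$REPO" "$WC" -q

echo "draft" > "$WC/spec.txt"
svn add "$WC/spec.txt" -q
# Attach a key-value property to the file and commit it with the file.
svn propset review-status "pending" "$WC/spec.txt" -q
svn commit "$WC" -m "Add spec with review-status property" -q

# The property travels with the object and is versioned alongside it.
STATUS="$(svn propget review-status "$WC/spec.txt")"
echo "$STATUS"
```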
Choice of Network Layers
Subversion's access layer has been abstracted to allow for multiple avenues of accessing a repository. This abstraction lets you develop your own access method or use an existing one, meaning you can use what works instead of being forced into a particular access model. Another layer of flexibility is Subversion's use of WebDAV, which allows repository interaction over http/https and usually poses no problem when accessing a repository from behind a firewall and/or proxy.
Consistent Data Handling
Subversion uses a binary differencing algorithm when storing version history that works identically on text and binary files. This means Subversion uses the same process for versioning both kinds of file: it stores the files and their differences the same way on the server regardless of file type, and it sends differences across the wire the same way regardless of file type.
Efficient Branching and Tagging
Subversion's approach to branching and tagging makes their cost independent of the size of the project being branched or tagged. On the server side, Subversion creates something similar to a hard link when the branch or tag is made. This means that branching and tagging in Subversion take a very small, constant amount of time and storage regardless of your project's size.
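A branch is created with a single server-side `svn copy`. The sketch below assumes the conventional (but not mandatory) `trunk`/`branches` layout and a made-up branch name:

```shell
set -e
REPO="$(mktemp -d)/repo"
svnadmin create "$REPO"
svn mkdir "file://$REPO/trunk" -m "Create trunk" -q

# A branch is a cheap server-side copy, regardless of project size.
svn copy "file://$REPO/trunk" "file://$REPO/branches/feature-x" \
    --parents -m "Branch trunk for feature work" -q

LISTING="$(svn ls "file://$REPO/branches")"
echo "$LISTING"
```

Because the copy is performed entirely in the repository, no working-copy data is transferred, which is why the operation is fast even for huge trees.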
Hackability
Subversion is its own project, built from the ground up around a well-defined C API. This means you can maintain and extend Subversion, and integrate it into other projects, easily. It is also worth noting that Subversion has bindings for many languages, including Java, Perl and Python.
Subversion In Detail
The list of features above isn't fully comprehensive so I figured it would be a good idea to discuss Subversion in a little more detail to outline more Subversion functionality and concepts.
Automatable and Scriptable
Subversion's output is both human-readable and machine-parsable. This means that those of you wanting to automate or script any part of Subversion should have no trouble doing so.
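As a small example of that parsability, `svn status` emits one stable line per changed item, so a one-line `awk` filter can pull out just the modified files (repository and file name below are throwaway examples):

```shell
set -e
REPO="$(mktemp -d)/repo"
svnadmin create "$REPO"
WC="$(mktemp -d)/wc"
svn checkout "file://$REPO" "$WC" -q

echo "v1" > "$WC/notes.txt"
svn add "$WC/notes.txt" -q
svn commit "$WC" -m "Add notes" -q
echo "v2" > "$WC/notes.txt"

# Each status line starts with a one-column state code ("M" = modified),
# which makes scripted filtering trivial.
MODIFIED="$(cd "$WC" && svn status | awk '$1 == "M" { print $2 }')"
echo "$MODIFIED"
```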
Change Sets
Subversion was built to be efficient over the wire and on disk: it aims to send as little data across the wire, and store as little on disk, as possible. Subversion does this via change sets. Every commit creates a change set containing exactly the changes required to reproduce that commit. Since Subversion does not version at the file level, change sets are Subversion's way of communicating the changes between revisions. This keeps Subversion efficient both over the wire and on disk, because it sends and stores only what is required to reproduce the commit that created each revision. In the end, the costs are proportional to the size of the change, not the size of the files.
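You can ask for the change set belonging to a single revision with `svn diff -c`. A sketch with a throwaway repository:

```shell
set -e
REPO="$(mktemp -d)/repo"
svnadmin create "$REPO"
WC="$(mktemp -d)/wc"
svn checkout "file://$REPO" "$WC" -q

echo "first line" > "$WC/log.txt"
svn add "$WC/log.txt" -q
svn commit "$WC" -m "Initial content" -q   # r1
echo "second line" >> "$WC/log.txt"
svn commit "$WC" -m "Append a line" -q     # r2

# -c 2 requests exactly the change set that produced revision 2,
# i.e. only the appended line, not the whole file.
DIFF="$(svn diff -c 2 "file://$REPO/log.txt")"
echo "$DIFF"
```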
Choice of Client
Since Subversion abstracts the access and interaction into well-defined APIs, you have your choice of using the particular Subversion client that fits your needs or environment. You can even mix-and-match which clients you use depending on your interaction needs.
Choice of Parallel Development Model
Subversion lets you pick and choose which parallel development methodology to use, and when. If you want the Lock-Modify-Unlock model for your binary files, so be it. If you want the Copy-Modify-Merge model for all non-binary files, that is fine too. You can even mix and match depending on your specific likes and needs.
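The Lock-Modify-Unlock model is opted into per file via the `svn:needs-lock` property plus `svn lock`. A sketch with a throwaway repository and a made-up binary file name:

```shell
set -e
REPO="$(mktemp -d)/repo"
svnadmin create "$REPO"
WC="$(mktemp -d)/wc"
svn checkout "file://$REPO" "$WC" -q

printf 'binary-ish payload' > "$WC/logo.png"
svn add "$WC/logo.png" -q
# svn:needs-lock marks the file for the Lock-Modify-Unlock workflow.
svn propset svn:needs-lock '*' "$WC/logo.png" -q
svn commit "$WC" -m "Add logo under lock-based workflow" -q

# Take the lock before editing; svn info reports the held lock token.
svn lock "$WC/logo.png" -m "Editing the logo" > /dev/null
INFO="$(svn info "$WC/logo.png")"
echo "$INFO"
```

Unversioned text files in the same working copy can continue to use plain Copy-Modify-Merge commits, which is the mix-and-match point made above.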
Internationalization
Subversion was built for global consumption and this commitment is shown by its internationalized messages.

Global Revisioning
Subversion uses a global revision number as opposed to file-level revision numbers. The concept here is that each revision number identifies the state of the entire repository at that point in time. This allows for many of the necessary features that Subversion has implemented.
Historical Tracking
Subversion's built-in capabilities are not limited to versioning the files and directories it is instructed to track. Subversion also comes with a complete toolkit for analyzing the history of the files and directories under version control. Change reports, release management and many other features are at your fingertips thanks to Subversion's built-in historical tracking capabilities.
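One piece of that toolkit is `svn blame` (also spelled `svn annotate`), which attributes every surviving line of a file to the revision that last touched it. A sketch with a throwaway repository:

```shell
set -e
REPO="$(mktemp -d)/repo"
svnadmin create "$REPO"
WC="$(mktemp -d)/wc"
svn checkout "file://$REPO" "$WC" -q

echo "alpha" > "$WC/history.txt"
svn add "$WC/history.txt" -q
svn commit "$WC" -m "First draft" -q
echo "beta" >> "$WC/history.txt"
svn commit "$WC" -m "Second draft" -q
svn update "$WC" -q

# Each output line is prefixed with the revision and author that
# introduced it, answering "who added this line?" directly.
BLAME="$(svn blame "$WC/history.txt")"
echo "$BLAME"
```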
Subversion In Use
We now know what Subversion is, but we haven't yet considered Subversion from the eyes of its users. The next section looks at Subversion from the perspective of five such users: a product developer, a product manager, a release manager, a repository administrator and a network/systems administrator. We will not write a book on each; the idea is to look at Subversion through their eyes and figure out how Subversion best accommodates them.
The Product Developer
A product developer is concerned with Subversion only insofar as it historically tracks the files and directories the developer works against. Nothing more. Developers need to locate resources, compare differences between revisions of resources and work on multiple products, releases or efforts at the same time. Subversion accommodates them: its design facilitates parallel development, and its simplicity of interaction lets the developer worry more about the product than about the intricacies of the version control tool. To a product developer, the following are most important:
Simplicity: Each Subversion tool is extremely well documented and is designed to allow the simplest migration path from another version control tool. Subversion is also simple for developers because only a handful of its features need to be understood for day-to-day development.
Flexibility: Developers can use whichever client best fits their needs, choosing whatever makes them most efficient. Clients are not the only level of flexibility in the eyes of a developer: Subversion users can also pick which development methodology to use when interacting with a repository, which allows development teams to build their own development processes.
Traceability: Beyond the typical interaction with the repository during development, developers also need to do minor historical tracking. Whether they need to know who added a particular line of code or who deleted a file, there is a real need to pull historical data from Subversion. The good thing is that Subversion's built-in historical capabilities are more than enough to provide traceability for a development project. Developers are probably the easiest to please with respect to Subversion. With its efficiency over the wire, its simple, well-documented commands and its historical tracking capabilities, Subversion is an excellent candidate for a version control system in the eyes of a developer.
The Product Manager
While the product developer is mainly concerned with the simplicity of interaction with the repository, a product manager will probably want to do more historical tracking to be able to properly manage the team working on the product. The manager will also be interested in the ability to work on multiple releases of the product in parallel. (Think about working on the current release, bug fix release and a proof-of-concept release at the same time.) To a product manager, the following are the most important:
Branching: To facilitate parallel development, a requirement when working on multiple releases at the same time, a product manager will be interested in Subversion's branching capabilities. Branching is the cornerstone of parallel development on multiple efforts at once.
Traceability: Traceability is where developer and manager needs slightly overlap. Developers need traceability to understand code changes; managers need it for other reasons. Managers manage developers, so when traceability comes to mind I think of code reviews, change reports, defect reports and release reports. Subversion accommodates all of these with its full-featured historical tracking.
Simplicity: Most managers want to manage without having to fully understand the underlying tooling. Subversion abstracts the access layer so that managers can use WebDAV clients, like Windows Web Folders, to simplify repository interaction. This, coupled with well-documented commands, makes a manager's job easy when managing a project that uses Subversion for version control. Product managers are extremely easy to please when it comes to Subversion. They want an easy way to interact with the repository, an easy way to trace releases and developer contributions, and the ability to manage multiple releases at the same time. Subversion makes a manager's job easy, and I'm sure the manager would agree.

The Release Manager
Think of the release manager as similar to a product manager: while a product manager manages the developers building the product, a release manager manages the releases of the project. Release managers are chiefly concerned with working on multiple releases in parallel and tracing changes between releases. Here is how Subversion accommodates release managers:
Branching: As with product managers, release managers need to ensure that multiple releases can be developed in parallel without cross-contaminating one release with the needs of another. Since branching is the only real way to facilitate parallel development in isolation, branching is a hot topic for release managers.
Tagging: Release managers need to archive releases, and Subversion lets you do this with tags. A tag is essentially a human-readable name given to a particular revision of a directory tree. Tagging makes a release manager's life easier because they can browse the tags directory and see which releases have shipped, without having to memorize or document the underlying revision number of each release point. A release is as simple as a tag with the release name, like "Release 1.0".
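A tag is created the same way as a branch, with a cheap server-side copy into the conventional `tags` directory (the layout and release name below are illustrative):

```shell
set -e
REPO="$(mktemp -d)/repo"
svnadmin create "$REPO"
svn mkdir "file://$REPO/trunk" -m "Create trunk" -q

# Archive the current state of trunk under a human-readable name.
svn copy "file://$REPO/trunk" "file://$REPO/tags/Release-1.0" \
    --parents -m "Tag the 1.0 release" -q

# Browsing tags/ answers "what has shipped?" at a glance.
TAGS="$(svn ls "file://$REPO/tags")"
echo "$TAGS"
```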
Traceability: Release managers need traceability to identify what was added, removed or fixed from one release to another. Subversion's historical tracking makes this simple: you can create a change log between releases, generate defect reports between releases (with the proper process in place) and even build more detailed reports from one release to another depending on your business needs. We are beginning to see how powerful and useful Subversion's historical tracking can be. Beyond that, release managers' lives are made much easier by a few convenience mechanisms like tagging.
The Repository Administrator
The repository administrator has one thing on his/her mind: repository layout and permissions. Here are the areas where the repository administrator will be concerned:
Flexibility: Subversion does not require or mandate any particular repository layout. Subversion also allows you to change just about any aspect of your repository whenever you feel the need to. Want to change from a single project repository to a multi-project repository? Want to use a non-standard repository layout? Subversion allows you to make the decisions and even allows you to change your mind easily with minimal downtime and effort.
Permissions: Depending on your server configuration, a Subversion repository administrator can integrate with many external authentication schemes for repository access. Once access is granted, the administrator can even apply file-level access control, all via a simple text file. No difficult configuration is needed to create a fully secure Subversion repository.
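That text file is the path-based authorization (authz) file. A small illustrative fragment, with made-up group and user names:

```
[groups]
devs = jane, raj

# Repository-wide default: anyone authenticated may read.
[/]
* = r

# Only the devs group may write to trunk.
[/trunk]
@devs = rw

# Nobody may modify shipped tags.
[/tags]
* = r
```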
Backup/Recovery: Subversion's backup and recovery tools are very simple to use, and Subversion's scriptability makes the process easy to automate. Subversion was built to make things simple in all respects, and repository administration is one of them. Repository administrators have the flexibility to choose the best repository layout for their projects and can even change the repository configuration at any time, thanks to Subversion's design.
The Network/Systems Administrator
Network/systems administrators are concerned with the security of the server and of the network to which the server is attached. Subversion's access capabilities make their job a lot easier, and here is how:
Unobtrusive: Subversion gives you the flexibility to choose the network layer through which to expose your repository. With this flexibility comes the ability, in most cases, to expose a repository without having to involve network and systems administrators. Since a well-configured Subversion repository can be accessed via http/https, you can usually provide access from behind a corporate firewall and/or proxy without creating new access rules or opening new ports. Subversion can usually be installed without really needing to talk to a network or systems administrator, thanks to its unobtrusive nature, which makes it much easier to roll out Subversion in your corporation securely.
Summary
As you can see, Subversion has a lot to offer to a lot of people. Out of the box, Subversion is a commercial-quality version control system, but its real value proposition is in the eye of the beholder. Developers will enjoy Subversion's ease of use and flexibility. Product managers will appreciate Subversion's ability to track multiple efforts concurrently. Release managers will welcome the ease of tracing releases. Repository administrators will welcome the flexibility Subversion gives you when providing access to your repository. Regardless of how you use Subversion, there is a lot to be gained. Subversion was built to be simple, flexible and powerful, and it provides many innovative features that give you the flexibility and power you will need from your version control system.

Saturday, December 29, 2007

Human Resource Management System

Human Resource Management Systems (HRMS, EHRMS), Human Resource Information Systems (HRIS), HR technology or HR modules sit at the intersection of human resource management (HRM) and information technology. They merge HRM as a discipline, and in particular its basic HR activities and processes, with the information technology field, as the planning and programming of data processing systems evolved into standardised routines and packages of enterprise resource planning (ERP) software. On the whole, these ERP systems have their origin in software that integrates information from different applications into one universal database. The linkage of financial and human resource modules through one database is the most important distinction from the individually developed, proprietary predecessors, and it makes this software application both rigid and flexible.

The HR function's reality
All in all, the HR function is still to a large degree administrative and common to all organizations. To varying degrees, most organizations have formalised selection, evaluation and payroll processes. Efficient and effective management of the "human capital" pool (HCP) has become an increasingly imperative and complex activity for all HR professionals. The HR function consists of tracking innumerable data points on each employee, from personal histories, data, skills, capabilities and experiences to payroll records. To reduce the manual workload of these administrative activities, organizations began to automate many of these processes by introducing HRMS/HCM technology. Due to the complexity of the programming, the capabilities required and limited technical resources, HR executives rely on internal or external IT professionals to develop and maintain their Human Resource Management Systems (HRMS). Before the client-server architecture evolved in the late 1980s, HR automation ran largely on mainframe computers that could handle large volumes of data transactions. Because of the high capital investment necessary to purchase or program proprietary software, these internally developed HRMS were limited to medium and large organizations able to afford internal IT capabilities. The advent of client-server HRMS allowed HR executives, for the first time, to take responsibility for and ownership of their systems. Client-server HRMS are characteristically developed around four principal areas of HR functionality: 1) payroll, 2) time and labour management, 3) benefits administration and 4) HR management.
The payroll module automates the pay process by gathering data on employee time and attendance, calculating various deductions and taxes, and generating periodic paycheques and employee tax reports. Data is generally fed from the human resources and time keeping modules to calculate automatic deposit and manual cheque writing capabilities. Sophisticated HCM systems can set up accounts payable transactions from employee deduction or produce garnishment cheques. The payroll module sends accounting information to the general ledger for posting subsequent to a pay cycle.
The time and labor management module applies new technology and methods (time collection devices) to cost effectively gather and evaluate employee time/work information. The most advanced modules provide broad flexibility in data collection methods, as well as labour distribution capabilities and data analysis features. This module is a key ingredient to establish organizational cost accounting capabilities.
The benefit administration module permits HR professionals to easily administer and track employee participation in benefits programs ranging from healthcare provider, insurance policy, and pension plan to profit sharing or stock option plans.
The HR management module is a component covering all other HR aspects from application to retirement. The system records basic demographic and address data, selection, training and development, capabilities and skills management, compensation planning records and other related activities. Leading edge systems provide the ability to "read" applications and enter relevant data to applicable database fields, notify employers and provide position management and position control.
Typically, HRMS/HCM technology streamlines the four core HR activities electronically: 1) payroll, 2) time and labour management, 3) benefits administration and 4) HR management. Using the internet or a corporate intranet as a communication and workflow vehicle, HRMS/HCM technology can turn these into web-based components of the ERP system and reduce transaction costs, leading to greater HR and organizational efficiency. Through employee or manager self-service (ESS or MSS), HR activities shift away from paper-based processes to self-service functionality that benefits employees, managers and HR professionals alike. Costly and time-consuming administrative tasks, such as travel reimbursement, personnel data changes, benefits enrollment and enrollment in training classes (on the employee side), or initiating a personnel action and authorising employee access to information (on the manager's side), are handled individually, reducing HR transaction time and improving HR and organizational effectiveness. Consequently, HR professionals can spend fewer resources managing administrative HR activities and apply the freed time and resources to strategic HR issues, which leads to business innovation.

EHRMS vendors
A wide variety of software vendors provide various subsets of HRMS functionality. For example, basic time and attendance packages provide employee timekeeping functionality, while other vendors focus primarily on payroll processing.
Open source EHRMSs are also available; however, they still lack end-to-end processes, functionality and integration with common or open source ERP systems.

Business Performance Management

Business performance management (BPM) is a set of processes that help organizations optimize their business performance. It is a framework for organizing, automating and analyzing business methodologies, metrics, processes and systems that drive business performance.[1]
BPM is seen as the next generation of business intelligence (BI). BPM helps businesses make efficient use of their financial, human, material and other resources.[2]

History

An early reference to non-business performance management occurs in Sun Tzu's The Art of War. Sun Tzu claims that to succeed in war, one should have full knowledge of one's own strengths and weaknesses and full knowledge of one's enemy's strengths and weaknesses. Lack of either one might result in defeat. A certain school of thought draws parallels between the challenges in business and those of war, specifically:
collecting data
discerning patterns and meaning in the data (analyzing)
responding to the resultant information
Prior to the start of the Information Age in the late 20th century, businesses sometimes took the trouble to laboriously collect data from non-automated sources. As they lacked computing resources to properly analyze the data, they often made commercial decisions primarily on the basis of intuition.
As businesses started automating more and more systems, more and more data became available. However, collection remained a challenge due to a lack of infrastructure for data exchange or due to incompatibilities between systems. Reports on the data gathered sometimes took months to generate. Such reports allowed informed long-term strategic decision-making. However, short-term tactical decision-making continued to rely on intuition.
In modern businesses, increasing standards, automation, and technologies have led to vast amounts of data becoming available.
Data warehouse technologies have set up repositories to store this data. Improved ETL and, more recently, Enterprise Application Integration tools have increased the speed of collecting data. OLAP reporting technologies have allowed faster generation of new reports which analyze the data. Business intelligence has now become the art of sifting through large amounts of data, extracting useful information and turning that information into actionable knowledge.
In 1989 Howard Dresner, a research analyst at Gartner until 2005 and now Chief Strategy Officer at Hyperion Solutions Corporation, popularized "Business Intelligence" as an umbrella term to describe a set of concepts and methods to improve business decision-making by using fact-based support systems. BPM is built on a foundation of BI, but marries it to the planning and control cycle of the enterprise, with enterprise planning, consolidation and modeling capabilities. As CSO at Hyperion, Dresner has become a champion for BPM and has suggested that it is subsuming BI.
The term "BPM" is now becoming confused with "
Business Process Management", and many are converting to the term "Corporate Performance Management" or "Enterprise Performance Management".

What is BPM?
BPM involves consolidation of data from various sources, querying, and analysis of the data, and putting the results into practice.
BPM enhances processes by creating better feedback loops. Continuous and real-time reviews help to identify and eliminate problems before they grow. BPM's forecasting abilities help the company take corrective action in time to meet earnings projections. Forecasting is characterized by a high degree of predictability which is put to good use to answer what-if scenarios. BPM is useful in risk analysis, in predicting the outcomes of merger and acquisition scenarios, and in coming up with a plan to overcome potential problems.
BPM provides key performance indicators (KPIs) that help companies monitor the efficiency of projects and employees against operational targets.

Metrics / Key Performance Indicators
For business data analysis to become a useful tool, however, it is essential that an enterprise understand its goals and objectives – essentially, that it knows the direction in which it wants to progress. To help with this analysis, key performance indicators (KPIs) are laid down to assess the present state of the business and to prescribe a course of action.
More and more organizations have started to speed up the availability of data. In the past, data only became available after a month or two, which did not help managers react swiftly enough. Recently, banks have tried to make data available at shorter intervals and have reduced delays. For example, for businesses which carry higher operational/credit risk loading (such as credit cards and "wealth management"), a large multi-national bank may make KPI-related data available weekly, and sometimes offer a daily analysis of numbers. In such cases data usually becomes available within 24 hours, necessitating automation and the use of IT systems.
Most of the time, BPM simply means use of several financial/nonfinancial metrics/key performance indicators to assess the present state of the business and to prescribe a course of action.
Some of the areas from which top management analysis could gain knowledge by using BPM:
Customer-related numbers:
New customers acquired
Status of existing customers
Attrition of customers (including breakup by reason for attrition)
Turnover generated by segments of the Customers - these could be demographic filters.
Outstanding balances held by segments of customers and terms of payment - these could be demographic filters.
Collection of bad debts within customer relationships.
Demographic analysis of individuals (potential customers) applying to become customers, and the levels of approval, rejections and pending numbers.
Delinquency analysis of customers behind on payments.
Profitability of customers by demographic segments and segmentation of customers by profitability.
This is more an inclusive list than an exclusive one. The above more or less describes what a bank would do, but it could also apply to a telephone company or a similar service-sector company.
What is important is:
KPI related data which is consistent and correct.
Timely availability of KPI-related data.
Information presented in a format which aids decision making
Ability to discern patterns or trends from organised information
BPM integrates the company's processes with CRM or ERP. Companies become able to gauge customer satisfaction, control customer trends and influence shareholder value.

Application software types
People working in business intelligence have developed tools that ease the work, especially when the intelligence task involves gathering and analyzing large amounts of unstructured data.
Tool categories commonly used for business performance management include:
OLAP — Online Analytical Processing, sometimes simply called "Analytics" (based on dimensional analysis and the so-called "hypercube" or "cube")
Scorecarding, dashboarding and data visualization
Data warehouses
Document warehouses
Text mining
DM — Data mining
BPM — Business performance management
EIS — Executive information systems
DSS — Decision support systems
MIS — Management information systems
SEMS — Strategic Enterprise Management Software

Designing and implementing a business performance management programme
When implementing a BPM programme one might like to pose a number of questions and take a number of resultant decisions, such as:
Goal Alignment queries: The first step is determining what the short and medium term purpose of the programme will be. What strategic goal(s) of the organization will be addressed by the programme? What organizational mission/vision does it relate to? A hypothesis needs to be crafted that details how this initiative will eventually improve results / performance (i.e. a strategy map).
Baseline queries: Current information gathering competency needs to be assessed. Do we have the capability to monitor important sources of information? What data is being collected and how is it being stored? What are the statistical parameters of this data, e.g., how much random variation does it contain? Is this being measured?
Cost and risk queries: The financial consequences of a new BI initiative should be estimated. It is necessary to assess the cost of present operations and the increase in costs associated with the BPM initiative. What is the risk that the initiative will fail? This risk assessment should be converted into a financial metric and included in the planning.
Customer and stakeholder queries: Determine who will benefit from the initiative and who will pay. Who has a stake in the current procedure? What kinds of customers / stakeholders will benefit directly from this initiative? Who will benefit indirectly? What are the quantitative / qualitative benefits? Is the specified initiative the best way to increase satisfaction for all kinds of customers, or is there a better way? How will customer benefits be monitored? What about employees, shareholders, and distribution channel members?
Metrics-related queries: These information requirements must be operationalized into clearly defined metrics. One must decide what metrics to use for each piece of information being gathered. Are these the best metrics? How do we know that? How many metrics need to be tracked? If this is a large number (it usually is), what kind of system can be used to track them? Are the metrics standardized, so they can be benchmarked against performance in other organizations? What are the industry standard metrics available?
Measurement Methodology-related queries: One should establish a methodology or a procedure to determine the best (or acceptable) way of measuring the required metrics. What methods will be used, and how frequently will data be collected? Are there any industry standards for this? Is this the best way to do the measurements? How do we know that?
Results-related queries: The BPM programme should be monitored to ensure that objectives are being met. Adjustments in the programme may be necessary. The programme should be tested for accuracy, reliability, and validity. How can it be demonstrated that the BI initiative, and not something else, contributed to a change in results? How much of the change was probably random?
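The baseline and results queries both hinge on separating real change from random variation. A minimal sketch, assuming a made-up weekly error-count KPI, is a simple two-sigma control check:

```python
import statistics

# Hypothetical weekly order-error counts collected for the baseline assessment.
baseline = [12, 9, 14, 11, 10, 13, 12, 9]

mean = statistics.mean(baseline)    # central tendency of the KPI
stdev = statistics.stdev(baseline)  # random variation in the KPI (sample std dev)

def is_significant(value, mean=mean, stdev=stdev, k=2):
    """Crude control-limit check: flag a post-initiative value only if it
    falls outside mean +/- k standard deviations; otherwise the change
    may simply be random variation."""
    return abs(value - mean) > k * stdev
```

A proper programme would use a formal statistical test rather than a fixed two-sigma band, but even this crude check answers the question "how much of the change was probably random?" more honestly than eyeballing a chart.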

Friday, December 28, 2007

Decision Support Systems

Making decisions concerning complex systems (e.g., the management of organizational operations, industrial processes, or investment portfolios; the command and control of military units; the control of nuclear power plants) often strains our cognitive capabilities. Even though individual interactions among a system's variables may be well understood, predicting how the system will react to an external manipulation such as a policy decision is often difficult. What will be, for example, the effect of introducing the third shift on a factory floor? One might expect that this will increase the plant's output by roughly 50%. Factors such as additional wages, machine wear, maintenance breaks, raw material usage, supply logistics, and future demand also need to be considered, however, because they will all affect the total financial outcome of this decision. Many variables are involved in complex and often subtle interdependencies, and predicting the total outcome may be daunting.

There is a substantial amount of empirical evidence that human intuitive judgment and decision making can be far from optimal, and it deteriorates even further with complexity and stress. In many situations, the quality of decisions is important; therefore, aiding the deficiencies of human judgment and decision making has been a major focus of science throughout history. Disciplines such as statistics, economics, and operations research developed various methods for making rational choices. More recently, these methods, often enhanced by various techniques originating from information science, cognitive psychology, and artificial intelligence, have been implemented in the form of computer programs, either as stand-alone tools or as integrated computing environments for complex decision making. Such environments are often given the common name of decision support systems (DSSs). The concept of DSS is extremely broad, and its definitions vary, depending on the author's point of view. To avoid exclusion of any of the existing types of DSSs, we define them roughly as interactive computer-based systems that aid users in judgment and choice activities. Another name sometimes used as a synonym for DSS is knowledge-based systems, which refers to their attempt to formalize domain knowledge so that it is amenable to mechanized reasoning.

Decision support systems are gaining increased popularity in various domains, including business, engineering, the military, and medicine. They are especially valuable in situations in which the amount of available information is prohibitive for the intuition of an unaided human decision maker, and in which precision and optimality are of importance. Decision support systems can aid human cognitive deficiencies by integrating various sources of information, providing intelligent access to relevant knowledge, and aiding the process of structuring decisions. They can also support choice among well-defined alternatives and build on formal approaches, such as the methods of engineering economics, operations research, statistics, and decision theory. They can also employ artificial intelligence methods to heuristically address problems that are intractable by formal techniques. Proper application of decision-making tools increases productivity, efficiency, and effectiveness, and gives many businesses a comparative advantage over their competitors, allowing them to make optimal choices for technological processes and their parameters, planning business operations, logistics, or investments.
Although it is difficult to overestimate the importance of various computer-based tools that are relevant to decision making (e.g., databases, planning software, spreadsheets), this article focuses primarily on the core of a DSS, the part that directly supports modeling decision problems and identifies best alternatives. We briefly discuss the characteristics of decision problems and how decision making can be supported by computer programs. We then cover various components of DSSs and the role that they play in decision support. We also introduce an emergent class of normative systems (i.e., DSSs based on sound theoretical principles), and in particular, decision-analytic DSSs. Finally, we review issues related to user interfaces to DSSs and stress the importance of user interfaces to the ultimate quality of decisions aided by computer programs.

Saturday, December 22, 2007

Optic Fibre Communication

Optical fiber can be used as a medium for telecommunication and networking because it is flexible and can be bundled as cables. It is especially advantageous for long-distance communications, because light propagates through the fiber with little attenuation compared to electrical cables. This allows long distances to be spanned with few repeaters. Additionally, the light signals propagating in the fiber can be modulated at rates as high as 40 Gb/s [3], and each fiber can carry many independent channels, each by a different wavelength of light (wavelength-division multiplexing). Over short distances, such as networking within a building, fiber saves space in cable ducts because a single fiber can carry much more data than a single electrical cable. Fiber is also immune to electrical interference, which prevents cross-talk between signals in different cables and pickup of environmental noise. Also, wiretapping is more difficult compared to electrical connections, and there are concentric dual core fibers that are said to be tap-proof. Because they are non-electrical, fiber cables can bridge very high electrical potential differences and can be used in environments where explosive fumes are present, without danger of ignition.
Although fibers can be made out of transparent plastic, glass, or a combination of the two, the fibers used in long-distance telecommunications applications are always glass, because of the lower optical attenuation. Both multi-mode and single-mode fibers are used in communications, with multi-mode fiber used mostly for short distances (up to 500 m), and single-mode fiber used for longer distance links. Because of the tighter tolerances required to couple light into and between single-mode fibers (core diameter about 10 micrometers), single-mode transmitters, receivers, amplifiers and other components are generally more expensive than multi-mode components.
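The repeater-spacing advantage can be made concrete with a back-of-the-envelope link budget. The figures below are illustrative assumptions (a loss typical of single-mode fiber near 1550 nm, an arbitrary high-frequency coax loss), not vendor specifications:

```python
# Back-of-the-envelope link budget: how far can a span run before a
# repeater/amplifier is needed? All figures are illustrative assumptions.
tx_power_dbm = 0.0           # transmitter launch power
rx_sensitivity_dbm = -28.0   # minimum usable receiver power
margin_db = 3.0              # safety margin for splices, connectors, aging

fiber_loss_db_per_km = 0.2   # typical single-mode loss near 1550 nm
coax_loss_db_per_km = 30.0   # illustrative loss for coax at high frequency

def max_span_km(loss_db_per_km):
    """Maximum unrepeated span: available power budget divided by per-km loss."""
    budget = tx_power_dbm - rx_sensitivity_dbm - margin_db
    return budget / loss_db_per_km

# With these numbers fiber spans on the order of 100 km unrepeated,
# while the coax link runs out of budget in under a kilometre.
```

The exact numbers vary with equipment, but the two-orders-of-magnitude gap in per-kilometre loss is what makes long unrepeated fiber spans possible.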
Fiber optic sensors
Optical fibers can be used as sensors to measure strain, temperature, pressure and other parameters. The small size, and the fact that no electrical power is needed at the remote location, give the fiber optic sensor advantages over conventional electrical sensors in certain applications.
Optical fibers are used as hydrophones for seismic or SONAR applications. Hydrophone systems with more than 100 sensors per fiber cable have been developed. Hydrophone sensor systems are used by the oil industry as well as a few countries' navies. Both bottom mounted hydrophone arrays and towed streamer systems are in use. The German company Sennheiser developed a microphone working with a laser and optical fibers[4].
Optical fiber sensors for temperature and pressure have been developed for downhole measurement in oil wells. The fiber optic sensor is well suited for this environment, as it functions at temperatures too high for semiconductor sensors (distributed temperature sensing).
Other sensor uses of the optical fiber include the optical gyroscope, which is in use in the Boeing 767 and in some car models (for navigation purposes), and hydrogen microsensors.
Fiber-optic sensors have been developed to measure co-located temperature and strain simultaneously with very high accuracy[5]. This is particularly useful to acquire information from small complex structures.
Other uses of optical fibers
[Image: A frisbee illuminated by fiber optics]
Fibers are widely used in illumination applications. They are used as light guides in medical and other applications where bright light needs to be shone on a target without a clear line-of-sight path. In some buildings, optical fibers are used to route sunlight from the roof to other parts of the building (see non-imaging optics). Optical fiber illumination is also used for decorative applications, including signs, art, and artificial Christmas trees. Swarovski boutiques use optical fibers to illuminate their crystal showcases from many different angles while only employing one light source. Optical fiber is an intrinsic part of the light-transmitting concrete building product, LiTraCon.
Decision Support Systems – DSS (definition)
Decision Support Systems (DSS) are a specific class of computerized information system that supports business and organizational decision-making activities. A properly designed DSS is an interactive software-based system intended to help decision makers compile useful information from raw data, documents, personal knowledge, and/or business models to identify and solve problems and make decisions.
Typical information that a decision support application might gather and present would be:
Accessing all of your current information assets, including legacy and relational data sources, cubes, data warehouses, and data marts
Comparative sales figures between one week and the next
Projected revenue figures based on new product sales assumptions
The consequences of different decision alternatives, given past experience in a context that is described
Information Builders' WebFOCUS reporting software is ideally suited for building decision support systems due to its wide reach of data, interactive facilities, ad hoc reporting capabilities, quick development times, and simple Web-based deployment.
The best decision support systems include high-level summary reports or charts and allow the user to drill down for more detailed information.
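The summary-plus-drill-down pattern described above can be sketched in a few lines; the sales data and field names here are hypothetical:

```python
# A minimal sketch of summary reports with drill-down to detail.
sales = [
    {"region": "North", "rep": "Avery", "amount": 500},
    {"region": "North", "rep": "Blake", "amount": 300},
    {"region": "South", "rep": "Casey", "amount": 450},
]

def summary(rows):
    """High-level report: total sales per region."""
    out = {}
    for r in rows:
        out[r["region"]] = out.get(r["region"], 0) + r["amount"]
    return out

def drill_down(rows, region):
    """Detail report for one region, as a user would see after clicking
    that region in the summary view."""
    return [r for r in rows if r["region"] == region]
```

A reporting tool wraps this same idea in charts and hyperlinks: the summary answers "where is the problem?", and the drill-down answers "what exactly is happening there?".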

Monday, December 17, 2007

Fibre Optic

Fiber-optic communication is a method of transmitting information from one place to another by sending light through an optical fiber. The light forms an electromagnetic carrier wave that is modulated to carry information. First developed in the 1970s, fiber-optic communication systems have revolutionized the telecommunications industry and played a major role in the advent of the Information Age.

It has become so popular in communications because of the following advantages.

Immunity to Electromagnetic Interference

Electromagnetic Interference is a common type of noise that originates with one of the basic properties of electromagnetism. Magnetic field lines generate an electrical current as they cut across conductors. The flow of electrons in a conductor generates a magnetic field that changes with the current flow. Electromagnetic Interference does occur in coaxial cables, since current does cut across the conductor. Fiber optics are immune to this EMI since signals are transmitted as light instead of current. Thus, they can carry signals through places where EMI would block transmission.

Data Security

Magnetic fields and current induction work in two ways. They don't just generate noise in signal-carrying conductors; they also let the information on the conductor leak out. Fluctuations in the induced magnetic field outside a conductor carry the same information as the current passing through the conductor. Shielding the wire, as in coaxial cables, can reduce the problem, but even a shielded cable can leak enough signal to allow tapping, which is exactly what we want to avoid.
There are no radiated magnetic fields around optical fibers; the electromagnetic fields are confined within the fiber. That makes it practically impossible to tap the signal being transmitted through a fiber without cutting into the fiber. Since fiber optics do not radiate electromagnetic energy, emissions cannot be intercepted, and physically tapping the fiber takes great skill to do undetected. Thus, fiber is the most secure medium available for carrying sensitive data.

Non Conductive Cables

Metal cables can encounter other signal transmission problems because of subtle variations in electrical potential. A serious concern with outdoor cables in certain computer networks is that they can be hit by lightning, destroying the wires and other cables involved in the network.
Any conductive cable can carry power surges or ground loops. Fiber optic cables can be made non-conductive by avoiding metal in their design. These kinds of cables are economical and standard for many indoor applications. Outdoor versions are more expensive since they require special strength members, but they can still be valuable in eliminating ground loops and protecting electronic equipment from surge damage.

Eliminating Spark Hazards

In some cases, transmitting signals electrically can be extremely dangerous. Most electric potentials create small sparks. The sparks ordinarily pose no danger, but they can be catastrophic in a chemical plant or oil refinery where the air is contaminated with potentially explosive vapours. One tiny spark can set off a large explosion. Potential spark hazards seriously hinder data and communication in such facilities. Fiber optic cables do not produce sparks since they do not carry current.

Ease Of Installation

Increasing transmission capacity of wire cables generally makes them thicker and more rigid. Such thick cables can be difficult to install in existing buildings where they must go through walls and cable ducts. Fiber cables are easier to install since they are smaller and more flexible. They can also run along the same routes as electric cables without picking up excessive noise.
One way to simplify installation in existing buildings is to run cables through ventilation ducts. However, fire codes require that such plenum cables be made of costly fire-retardant materials that emit little smoke. The advantage of fiber is that, being smaller, it requires less of these costly fire-retardant materials. The small size, light weight, and flexibility of fiber optic cables also make them easier to use in temporary or portable installations.

High Bandwidth Over Long Distances

Optical fiber can carry high-speed signals over much longer distances without repeaters than other types of cable. The information-carrying capacity increases with frequency. This doesn't mean that optical fiber has infinite bandwidth, but it is certainly greater than that of coaxial cable. This is an important factor in the choice of fiber for data communications. Fiber can also be added to a wire network to reach terminals outside its normal range.
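The claim that capacity grows with bandwidth, but not without limit, follows from the Shannon-Hartley law, C = B·log2(1 + SNR). The bandwidth and SNR figures below are illustrative assumptions only:

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_linear):
    """Shannon-Hartley limit: C = B * log2(1 + SNR), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative numbers only: coax with ~1 GHz of usable bandwidth versus
# a single optical channel with ~50 GHz, both at an SNR of 1000 (~30 dB).
coax = shannon_capacity_bps(1e9, 1000)
fiber_channel = shannon_capacity_bps(50e9, 1000)

# Capacity scales linearly with bandwidth at fixed SNR, so the optical
# channel's ceiling is ~50x higher here; a whole fiber carries many such
# channels via wavelength-division multiplexing.
```

The point is not the specific numbers but the scaling: at the same signal quality, the medium with more usable bandwidth has a proportionally higher capacity ceiling.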

Saturday, December 8, 2007

Information Management

The ability to effectively manage records and documents to meet regulatory requirements is a constant challenge and one that needs careful attention.
IMS showcases the leading suppliers of document, records, content and workflow management products and services that give organisations better control over information.
European Research Center for Information Systems
[Image: ERCIS Buildings in Münster]
The European Research Center for Information Systems (ERCIS) was founded in 2004 at the University of Münster in Münster, North Rhine-Westphalia, Germany. The objective of ERCIS is connecting research in Information systems with Business, Computer Science, Communication Sciences, Law, Management and Mathematics. The ERCIS consists of leading national and international universities and companies in the field of Information Systems.
An Information System (IS) is the system of persons, data records and activities that process the data and information in a given organization, including manual and automated processes. The term is often used erroneously as a synonym for computer-based information systems, which are only the information technology component of an Information System. Computer-based information systems are the field of study of information technology (IT); however, they should not be treated apart from the larger Information System in which they are always embedded.

The term information system has different meanings:
In computer security, an information system is described by three objects (Aceituno, 2004):
Structure:
Repositories, which hold data permanently or temporarily, such as buffers, RAM, hard disks, cache, etc.
Interfaces, which exchange information with the non-digital world, such as keyboards, speakers, scanners, printers, etc.
Channels, which connect repositories, such as buses, cables, wireless links, etc. A Network is a set of logical or physical channels.
Behavior:
Services, which provide value to users or to other services via message interchange.
Messages, which carry meaning to users or services.
In geography and cartography, a geographic information system (GIS) is used to integrate, store, edit, analyze, share, and display georeferenced information. There are many applications of GIS, ranging from ecology and geology, to the social sciences.
In knowledge representation, an information system consists of three components: human, technology, and organization. In this view, information is defined in terms of the three levels of semiotics. Data which can be automatically processed by the application system corresponds to the syntax level. In the context of an individual who interprets the data, it becomes information, which corresponds to the semantic level. Information becomes knowledge when an individual knows (understands) and evaluates the information (e.g., for a specific task); this corresponds to the pragmatic level.
In mathematics in the area of domain theory, a Scott information system (after its inventor Dana Scott) is a mathematical structure that provides an alternative representation of Scott domains and, as a special case, algebraic lattices.
In mathematics rough set theory, an information system is an attribute-value system.
In sociology information systems are also social systems whose behavior is heavily influenced by the goals, values and beliefs of individuals and groups, as well as the performance of the technology.[1]
In systems theory, an information system is a system, automated or manual, that comprises people, machines, and/or methods organized to collect, process, transmit, and disseminate data that represent user information.
In telecommunications, an information system is any telecommunications and/or computer related equipment or interconnected system or subsystems of equipment that is used in the acquisition, storage, manipulation, management, movement, control, display, switching, interchange, transmission, or reception of voice and/or data, and includes software, firmware, and hardware.[2]
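The computer-security description above (Aceituno, 2004) maps naturally onto a small object model. The sketch below is our own rendering of that taxonomy, not code from the cited source:

```python
# Toy model of an information system as structure (repositories, interfaces,
# channels) plus behavior (services, messages), following the text above.
from dataclasses import dataclass, field

@dataclass
class Repository:          # holds data permanently or temporarily
    name: str

@dataclass
class Interface:           # exchanges information with the non-digital world
    name: str

@dataclass
class Channel:             # connects repositories; a network is a set of channels
    source: Repository
    target: Repository

@dataclass
class InformationSystem:
    repositories: list = field(default_factory=list)
    interfaces: list = field(default_factory=list)
    channels: list = field(default_factory=list)

ram = Repository("RAM")
disk = Repository("hard disk")
system = InformationSystem(
    repositories=[ram, disk],
    interfaces=[Interface("keyboard"), Interface("printer")],
    channels=[Channel(ram, disk)],
)
```

Modeling the taxonomy this way makes the security analyst's job concrete: every repository, interface, and channel in the model is an asset to enumerate and protect.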

History of information systems
The study of information systems originated as a sub-discipline of computer science in an attempt to understand and rationalize the management of technology within organizations. It has matured into a major field of management that is increasingly emphasized as an important area of research in management studies, and it is taught at all major universities and business schools in the world.
Today, information and information technology have become the fifth major resource available to executives for shaping an organization, alongside people, money, material and machines.[3] Many companies have created the position of Chief Information Officer (CIO), who sits on the executive board with the Chief Executive Officer (CEO), Chief Financial Officer (CFO), Chief Operating Officer (COO) and Chief Technical Officer (CTO). The CTO may also serve as CIO, and vice versa.

Study of information systems
Ciborra (2002) defined the study of information systems as the study that deals with the deployment of information technology in organizations, institutions, and society at large.[4]
Many colleges and universities, such as the Carnegie Mellon University, the University of California - Berkeley, the University of Michigan, University of Colorado, Syracuse University, George Mason University, University of Washington, George Washington University, New York University, Claremont Graduate University, the University of Toronto, Multimedia University, University of Idaho, and the University of Limerick currently offer undergraduate and graduate degrees in information systems and closely related fields.

Applications of information systems
Information systems deal with the development, use and management of an organization's IT infrastructure.
In the post-industrial information age, the focus of companies has shifted from being product-oriented to knowledge-oriented, in the sense that market operators today compete on process and innovation rather than product: the emphasis has shifted from the quality and quantity of production to the production process itself and the services that accompany it.
The biggest asset of companies today is their information, represented in people, experience, know-how, and innovations (patents, copyrights, trade secrets). For a market operator to be able to compete, he or she must have a strong information infrastructure, at the heart of which lies the information technology infrastructure. Thus, the study of information systems focuses on why and how technology can be put to best use to serve the information flow within an organization.

Areas of work
Information Systems has a number of different areas of work:
Information Systems Strategy
Information Systems Management
Information Systems Development
Each of these branches out into a number of sub-disciplines that overlap with other scientific and managerial disciplines such as computer science, pure and engineering sciences, social and behavioral sciences, and business management.

Information technology development
The IT department partly governs the information technology development, use, application and influence on a business or corporation. A computer-based information system, following a definition of Langefors[5], is:
a technologically implemented medium for recording, storing, and disseminating linguistic expressions,
as well as for drawing conclusions from such expressions.
Such a definition can be formulated as a generalized mathematical program for information systems design.

Friday, December 7, 2007

10 principles of effective information management

Improving information management practices is a key focus for many organisations, across both the public and private sectors.
This is being driven by a range of factors, including a need to improve the efficiency of business processes, the demands of compliance regulations and the desire to deliver new services.
In many cases, 'information management' has meant deploying new technology solutions, such as content or document management systems, data warehousing or portal applications.
These projects have a poor track record of success, and most organisations are still struggling to deliver an integrated information management environment.
Effective information management is not easy. There are many systems to integrate, a huge range of business needs to meet, and complex organisational (and cultural) issues to address.

'Information management' is an umbrella term that encompasses all the systems and processes within an organisation for the creation and use of corporate information.
In terms of technology, information management encompasses systems such as:

• web content management (CM)
• document management (DM)
• records management (RM)
• digital asset management (DAM)
• learning management systems (LM)
• learning content management systems (LCM)
• collaboration
• enterprise search
• and many more...

Information management therefore encompasses:

• people
• process
• technology
• content

Each of these must be addressed if information management projects are to succeed.

Ten principles
________________________________________
This article introduces ten key principles to ensure that information management activities are effective and successful:
1. recognise (and manage) complexity
2. focus on adoption
3. deliver tangible & visible benefits
4. prioritise according to business needs
5. take a journey of a thousand steps
6. provide strong leadership
7. mitigate risks
8. communicate extensively
9. aim to deliver a seamless user experience
10. choose the first project very carefully


Principle 1: recognise (and manage) complexity
________________________________________
Organisations are very complex environments in which to deliver concrete solutions. As outlined above, there are many challenges that need to be overcome when planning and implementing information management projects.
When confronted with this complexity, project teams often fall back upon approaches such as:

• Focusing on deploying just one technology in isolation.
• Purchasing a very large suite of applications from a single vendor, in the hope that this can be used to solve all information management problems at once.
• Rolling out rigid, standardised solutions across a whole organisation, even though individual business areas may have different needs.
• Forcing the use of a single technology system in all cases, regardless of whether it is an appropriate solution.
• Purchasing a product 'for life', even though business requirements will change over time.
• Fully centralising information management activities, to ensure that every activity is tightly controlled.

All of these approaches will fail, as they are attempting to convert a complex set of needs and problems into simple (even simplistic) solutions. The hope is that the complexity can be limited or avoided when planning and deploying solutions.
In practice, however, there is no way of avoiding the inherent complexities within organisations. New approaches to information management must therefore be found that recognise (and manage) this complexity.

Organisations must stop looking for simple approaches, and must stop believing vendors when they offer 'silver bullet' technology solutions.
Instead, successful information management is underpinned by strong leadership that defines a clear direction (principle 6). Many small activities should then be planned to address in parallel the many needs and issues (principle 5).
Risks must then be identified and mitigated throughout the project (principle 7), to ensure that organisational complexities do not prevent the delivery of effective solutions.
Information systems are only successful if they are used

Principle 2: focus on adoption
________________________________________
Information management systems are only successful if they are actually used by staff, and it is not sufficient to simply focus on installing the software centrally.
In practice, most information management systems need the active participation of staff throughout the organisation.

For example:
• Staff must save all key files into the document/records management system.
• Decentralised authors must use the content management system to regularly update the intranet.
• Lecturers must use the learning content management system to deliver e-learning packages to their students.
• Front-line staff must capture call details in the customer relationship management system.
In all these cases, the challenge is to gain sufficient adoption to ensure that required information is captured in the system. Without a critical mass of usage, corporate repositories will not contain enough information to be useful.
This presents a considerable change management challenge for information management projects. In practice, it means that projects must be carefully designed from the outset to ensure that sufficient adoption is gained.

This may include:
• Identifying the 'what's in it for me' factors for end users of the system.
• Communicating clearly to all staff the purpose and benefits of the project.
• Carefully targeting initial projects to build momentum for the project (see principle 10).
• Conducting extensive change management and cultural change activities throughout the project.
• Ensuring that the systems that are deployed are useful and usable for staff.
These are just a few of the possible approaches, and they demonstrate the wide implications of needing to gain adoption by staff.
It is not enough to deliver 'behind the scenes' fixes

Principle 3: deliver tangible & visible benefits
________________________________________
It is not enough to simply improve the management of information 'behind the scenes'. While this will deliver real benefits, it will not drive the required cultural changes, or assist with gaining adoption by staff (principle 2).
In many cases, information management projects initially focus on improving the productivity of publishers or information managers.
While these are valuable projects, they are invisible to the rest of the organisation. When challenged, it can be hard to demonstrate the return on investment of these projects, and they do little to assist project teams to gain further funding.
Instead, information management projects must always be designed so that they deliver tangible and visible benefits.
Delivering tangible benefits involves identifying concrete business needs that must be met (principle 4). This allows meaningful measurement of the impact of the projects on the operation of the organisation.
The projects should also target issues or needs that are very visible within the organisation. When solutions are delivered, the improvement should be obvious, and widely promoted throughout the organisation.
For example, improving the information available to call centre staff can have a very visible and tangible impact on customer service.
In contrast, creating a standard taxonomy for classifying information across systems is hard to quantify and rarely visible to general staff.
This is not to say that 'behind the scenes' improvements are not required, but rather that they should always be partnered with changes that deliver more visible benefits.
This also has a major impact on the choice of the initial activities conducted (principle 10).
Tackle the most urgent business needs first

Principle 4: prioritise according to business needs
________________________________________
It can be difficult to know where to start when planning information management projects.
While some organisations attempt to prioritise projects according to the 'simplicity' of the technology to be deployed, this is not a meaningful approach. In particular, this often doesn't deliver short-term benefits that are tangible and visible (principle 3).
Instead of this technology-driven approach, the planning process should be turned around entirely, to drive projects based on their ability to address business needs.
In this way, information management projects are targeted at the most urgent business needs or issues. These in turn are derived from the overall business strategy and direction for the organisation as a whole.
For example, the rate of errors in home loan applications might be identified as a strategic issue for the organisation. A new system might therefore be put in place (along with other activities) to better manage the information that supports the processing of these applications.
Alternatively, a new call centre might be in the process of being planned. Information management activities can be put in place to support the establishment of the new call centre, and the training of new staff.
Avoid 'silver bullet' solutions that promise to fix everything

Principle 5: take a journey of a thousand steps
________________________________________
There is no single application or project that will address and resolve all the information management problems of an organisation.
Where organisations look for such solutions, large and costly strategic plans are developed. Assuming the results of this strategic planning are actually delivered (which they often aren't), they usually describe a long-term vision but give few clear directions for immediate actions.
In practice, anyone looking to design the complete information management solution will be trapped by 'analysis paralysis': the inability to escape the planning process.
Organisations are simply too complex to consider all the factors when developing strategies or planning activities.
The answer is to let go of the desire for a perfectly planned approach. Instead, project teams should take a 'journey of a thousand steps'.
This approach recognises that there are hundreds (or thousands) of often small changes that are needed to improve the information management practices across an organisation. These changes will often be implemented in parallel.
While some of these changes are organisation-wide, most are actually implemented at business unit (or even team) level. When added up over time, these numerous small changes have a major impact on the organisation.
This is a very different approach to that typically taken in organisations, and it replaces a single large (centralised) project with many individual initiatives conducted by multiple teams.
While this can be challenging to coordinate and manage, this 'thousand steps' approach recognises the inherent complexity of organisations (principle 1) and is a very effective way of mitigating risks (principle 7).
It also ensures that 'quick wins' can be delivered early on (principle 3), and allows solutions to be targeted to individual business needs (principle 4).
Successful projects require strong leadership

Principle 6: provide strong leadership
________________________________________
Successful information management is about organisational and cultural change, and this can only be achieved through strong leadership.
The starting point is to create a clear vision of the desired outcomes of the information management strategy. This will describe how the organisation will operate, more than just describing how the information systems themselves will work.
Effort must then be put into generating a sufficient sense of urgency to drive the deployment and adoption of new systems and processes.
Stakeholders must also be engaged and involved in the project, to ensure that there is support at all levels in the organisation.
This focus on leadership then underpins a range of communications activities (principle 8) that ensure that the organisation has a clear understanding of the projects and the benefits they will deliver.
When projects are solely driven by the acquisition and deployment of new technology solutions, this leadership is often lacking. Without the engagement and support of key stakeholders outside the IT area, these projects often have little impact.
Apply good risk management to ensure success

Principle 7: mitigate risks
________________________________________
Due to the inherent complexity of the environment within organisations (principle 1), there are many risks in implementing information management solutions. These risks include:

• selecting an inappropriate technology solution
• time and budget overruns
• changing business requirements
• technical issues, particularly relating to integrating systems
• failure to gain adoption by staff

At the outset of planning an information management strategy, the risks should be clearly identified. An approach must then be determined for each risk, to either avoid or mitigate it.
Risk management approaches should then be used to plan all aspects of the project, including the activities conducted and the budget spent.
For example, a simple but effective way of mitigating risks is to spend less money. This might involve conducting pilot projects to identify issues and potential solutions, rather than starting with enterprise-wide deployments.

Principle 8: communicate extensively
________________________________________
Extensive communication from the project team (and project sponsors) is critical for a successful information management initiative.
This communication ensures that staff have a clear understanding of the project, and the benefits it will deliver. This is a pre-requisite for achieving the required level of adoption.
With many projects happening simultaneously (principle 5), coordination becomes paramount. All project teams should devote time to work closely with each other, to ensure that activities and outcomes are aligned.
In a complex environment, it is not possible to enforce a strict command-and-control approach to management (principle 1).
Instead, a clear end point ('vision') must be created for the information management project, and communicated widely. This allows each project team to align themselves to the eventual goal, and to make informed decisions about the best approaches.
For all these reasons, the first step in an information management project should be to develop a clear communications 'message'. This should then be supported by a communications plan that describes target audiences, and methods of communication.
Project teams should also consider establishing a 'project site' on the intranet at the outset, to provide a location for planning documents, news releases, and other updates.
Staff do not understand the distinction between systems

Principle 9: aim to deliver a seamless user experience
________________________________________
Users don't understand systems. When presented with six different information systems, each containing one-sixth of what they want, they generally rely on a piece of paper instead (or ask the person next to them).
Educating staff in the purpose and use of a disparate set of information systems is difficult, and generally fruitless. The underlying goal should therefore be to deliver a seamless user experience, one that hides the systems that the information is coming from.
This is not to say that there should be one enterprise-wide system that contains all information.
There will always be a need to have multiple information systems, but the information contained within them should be presented in a human-friendly way.
In practice, this means:

• Delivering a single intranet (or equivalent) that gives access to all information and tools.
• Ensuring a consistent look-and-feel across all applications, including standard navigation and page layouts.
• Providing 'single sign-on' to all applications.

Ultimately, it also means breaking down the distinctions between applications, and delivering tools and information along task and subject lines.
For example, many organisations store HR procedures on the intranet, but require staff to log in to a separate 'HR self-service' application that provides a completely different menu structure and appearance.
Improving on this, leave details should be located alongside the leave form itself. In this model, the HR application becomes a background system, invisible to the user.
Care should also be taken, however, when looking to a silver-bullet solution for providing a seamless user experience. Despite the promises, portal applications do not automatically deliver this.
Instead, a better approach may be to leverage the inherent benefits of the web platform. As long as the applications all look the same, the user will be unaware that they are accessing multiple systems and servers behind the scenes.
Of course, achieving a truly seamless user experience is not a short-term goal. Plan to incrementally move towards this goal, delivering one improvement at a time.
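The 'single front door' idea above can be sketched in a few lines of code. This is only an illustrative model, not an implementation recipe: the system names, paths, and routing rules are hypothetical, and in practice this role is usually played by a web server or reverse proxy rather than application code. The point it demonstrates is that only the routing layer knows multiple systems exist; the user always addresses one intranet.

```python
# A minimal sketch of principle 9's seamless front end: one entry point
# dispatches requests to several back-end systems by path prefix, so staff
# only ever see a single intranet address. The system names and paths are
# hypothetical examples, not taken from the article.

def hr_system(path: str) -> str:
    # Stands in for an existing HR self-service application.
    return f"HR system served {path}"

def finance_system(path: str) -> str:
    # Stands in for a separate finance application.
    return f"Finance system served {path}"

# The routing table is the only place that knows multiple systems exist.
ROUTES = {
    "/hr/": hr_system,
    "/finance/": finance_system,
}

def intranet(path: str) -> str:
    """Single front door: pick the back-end by longest matching prefix."""
    for prefix in sorted(ROUTES, key=len, reverse=True):
        if path.startswith(prefix):
            return ROUTES[prefix](path)
    return f"No system handles {path}"

print(intranet("/hr/leave-form"))     # the HR system answers, via one address
print(intranet("/finance/expenses"))  # a different system, same front door
```

In a real deployment the same effect is typically achieved with reverse-proxy configuration and a shared look-and-feel, so the back-end applications remain invisible to staff.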
The first project must build momentum for further work

Principle 10: choose the first project very carefully
________________________________________
The choice of the first project conducted as part of a broader information management strategy is critical. This project must be selected carefully, to ensure that it:

• demonstrates the value of the information management strategy
• builds momentum for future activities
• generates interest and enthusiasm from both end-users and stakeholders
• delivers tangible and visible benefits (principle 3)
• addresses an important or urgent business need (principle 4)
• can be clearly communicated to staff and stakeholders (principle 8)
• assists the project team in gaining further resources and support

Actions speak louder than words. The first project is the single best (and perhaps only) opportunity to set the organisation on the right path towards better information management practices and technologies.
The first project must therefore be chosen according to its ability to act as a 'catalyst' for further organisational and cultural changes.
In practice, this often involves starting with one problem or one area of the business that the organisation as a whole would be interested in, and cares about.
For example, starting by restructuring the corporate policies and procedures will generate little interest or enthusiasm. In contrast, delivering a system that greatly assists salespeople in the field would be something that could be widely promoted throughout the organisation.