Enabling Enterprise Data: The Journey from IT Reports to End-User Self-Service.
Case history and interactive discussion about moving data from individual sources to an enterprise data store, and then architecting user access while maintaining security, controls, and auditability. How do we get IT out of the report-writing business?
Featured Speaker: Brian Larson; Superior Consulting Services http://www.teamscs.com/brian-larson-bio.html
About 8 of us gathered at St. Mary’s College for this discussion. Notes in [] are the editor’s.
Many people want to get IT out of the structured reports business and let end users build their own reports. Brian Larson does not believe this will happen anytime soon, especially for reports required by government regulations or financial reporting laws. But he can see a future where more reports will be tailored by users.
He broke ad hoc reporting tools into four categories:
1. Dashboard architecture with drilldown capabilities
2. Highly parameterizable “super reports”, where IT creates a framework and users select which records to filter and which columns to present (a rough sketch follows this list)
3. Report authoring environments (“traditional” report writing software)
4. Interactive tools for cube exploration
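To make category 2 concrete, here is a rough sketch in Python of a parameterized “super report” (my own illustration, not anything shown at the meeting; all names and data are invented). IT owns the data source and the lists of allowed columns and filters, and the user only chooses among them.

# A rough sketch of a parameterized "super report": IT owns the data source and
# the lists of allowed columns and filters; the user only picks values.

# Sample rows standing in for an IT-managed data source (all names invented).
SALES = [
    {"region": "East", "product": "Widget", "units": 120, "revenue": 2400.0},
    {"region": "West", "product": "Widget", "units": 80,  "revenue": 1600.0},
    {"region": "East", "product": "Gadget", "units": 45,  "revenue": 2250.0},
]

ALLOWED_COLUMNS = {"region", "product", "units", "revenue"}
ALLOWED_FILTERS = {"region", "product"}

def super_report(columns, filters):
    """Return only the requested columns for rows matching the user's filters."""
    bad = (set(columns) - ALLOWED_COLUMNS) | (set(filters) - ALLOWED_FILTERS)
    if bad:
        raise ValueError(f"Not permitted by the IT framework: {bad}")
    rows = [r for r in SALES if all(r[k] == v for k, v in filters.items())]
    return [{c: r[c] for c in columns} for r in rows]

# A user "tailors" the report without touching the underlying query logic.
print(super_report(columns=["product", "revenue"], filters={"region": "East"}))

The point of the design is that the user’s choices can never reach outside what IT has defined, which keeps the report auditable even though its content is user-selected.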
The classic reporting technique is now Excel Pivot Tables.
Microsoft’s vision seems to be forming around a set of SQL databases for OLTP, a data warehouse that collects data from the OLTP databases, a set of cubes for exploration, and an Enterprise Report Server. Brian drew these in a big box as the IT domain, where they are defined and maintained by IT professionals. The Enterprise Report Server can be shared with the user domain, where Microsoft supplies Excel pivot tables; something new to be called PowerPivot [http://www.powerpivot.com/]; various report builders, namely Report Builder 1.0 [http://msdn.microsoft.com/en-us/library/ms155933.aspx], Report Builder 2.0 [http://www.microsoft.com/sqlserver/2008/en/us/report-builder.aspx], and Report Builder 3.0 [http://msdn.microsoft.com/en-us/library/dd220460(SQL.105).aspx]; and SharePoint for sharing all of these objects.
One problem with Excel as a reporting mechanism is version control and knowing whether a given spreadsheet is accurate. There are many ways to do version control for Excel. [Afterward, a brief Google search on Excel version control turned up about six vendors right away.] Microsoft is also integrating some version/revision control into the next release.
Another need is to maintain reference and update counts so that IT can identify reports that are used frequently, or used by senior staff, as candidates for inclusion in the IT cloud, where they can be made more efficient and their reliability verified.
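As a rough illustration of what such usage tracking involves (my own sketch, not anything presented at the meeting; all names are invented), the idea is simply to log each report access and rank reports by reference count so IT knows which ones to consider adopting.

# A rough sketch of report usage tracking: record each report access, then rank
# reports by reference count. (All names are invented for illustration.)
from collections import Counter
from datetime import datetime

access_log = []  # in a real system this would live in a database, not in memory

def record_access(report_name, user):
    access_log.append({"report": report_name, "user": user,
                       "when": datetime.now().isoformat()})

def most_referenced(top_n=5):
    counts = Counter(entry["report"] for entry in access_log)
    return counts.most_common(top_n)

record_access("Monthly Sales", "joe")
record_access("Monthly Sales", "cfo")
record_access("Scrap Rates", "joe")
print(most_referenced())  # [('Monthly Sales', 2), ('Scrap Rates', 1)]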
One of us pointed out a widely reported statistic that 83% of spreadsheets contain serious calculation errors. That error rate means any spreadsheet used by decision makers to make large or final decisions should be validated by some IT process. Most of us accepted that many reports are just what-if-style data collection, used to decide whether an idea has enough merit to be taken further.
One classic dilemma is the super-user, call him Joe, who produces spreadsheets with various views of business data and sends them out to different mailing lists in the organization. Joe is popular with many people in the organization because he is giving them data to make good arguments. Joe is very happy with his position, where everyone is talking about what “Joe’s” spreadsheet shows, and he is building a good name for himself. But when the IT department finds out, they feel they need to validate what Joe is doing and to try to serve the decision makers themselves. Joe, of course, is resistant, resulting in a stand-off.
One solution is for IT to essentially hire Joe and train him to think like IT. However, this may be foreign to Joe and may take a long time, and it requires management support. In one organization, the CIO worked with Joe’s management to make him a logical part of IT in this role and require him to follow IT’s processes, without changing his organizational reporting lines.
If you can get Joe to put his spreadsheets in SharePoint, you can at least back them up. But SharePoint presents other issues: it attracts many kinds of data and is not well controlled in many IT organizations. Others said they do manage to control SharePoint fairly well, and that it is better than routing the spreadsheets by e-mail. In this discussion, one person related that they had been involved in lawsuits where discovery turned up a whole pile of spreadsheets that were interim what-ifs. The opposing lawyers treated them as formal commitments, and a lot of legal help was required to disarm them. They became weapons against the company, and that can happen to any uncontrolled reports.
Microsoft is moving towards building OLAP cubes directly from the OLTP systems, without a data warehouse in the middle. This requires solving the data cleaning issues. Brian Larson is not an advocate of letting users run an ad hoc tool directly against the OLTP databases, mostly because of the performance implications. With cubes and a cube exploration tool, the impact can be controlled better and isolated from the production systems. And the OLAP cube reflects the business logic better; you can pre-aggregate data where appropriate.
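As a rough illustration of what pre-aggregation buys you (my own sketch using pandas, not the actual cube build; the column names are invented): the detail rows are rolled up once, offline, so exploratory queries read a small summary instead of the production tables.

# A rough sketch of pre-aggregation: roll detail rows up once, offline, so that
# exploratory queries read the small summary instead of the OLTP tables.
import pandas as pd

# Detail rows standing in for OLTP order lines (columns are invented).
detail = pd.DataFrame({
    "year":    [2009, 2009, 2009, 2009],
    "region":  ["East", "East", "West", "West"],
    "product": ["Widget", "Gadget", "Widget", "Gadget"],
    "revenue": [2400.0, 2250.0, 1600.0, 900.0],
})

# Build the pre-aggregated, cube-like summary one time.
summary = detail.groupby(["year", "region"], as_index=False)["revenue"].sum()

# User exploration now touches only the summary table.
print(summary[summary["region"] == "East"])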
Brian Larson is also not an advocate of users creating relational queries with those tools, again because they have performance issues and are much more difficult to use and control. There are performance governors that can limit the impact of such queries on the production databases, but Brian was not supportive of resource limits that abort reports that run too long; that just means they get run in subsets and take even longer. [Not mentioned in the meeting was my experience with limits paired with very competent, supportive coordinators: people would take the failed request to a coordinator, who would immediately help them optimize the query and get the job done. If needed, the coordinators could override the limits, either once or every time for a particular query.]
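For what a governor of that kind does mechanically, here is a rough sketch (my own illustration, not any particular product’s feature): an ad hoc query runs under a time budget and is aborted if it exceeds the limit.

# A rough sketch of a query governor: run an ad hoc query under a time budget
# and abort it if it exceeds the limit. (Illustration only, not a product feature.)
import multiprocessing
import time

def slow_query(seconds):
    time.sleep(seconds)  # stand-in for an expensive ad hoc query
    return f"finished after {seconds}s"

def run_with_limit(func, args, limit_seconds):
    """Run func in a worker process and abort it if it exceeds limit_seconds."""
    with multiprocessing.Pool(1) as pool:
        result = pool.apply_async(func, args)
        try:
            return result.get(timeout=limit_seconds)
        except multiprocessing.TimeoutError:
            return "aborted: exceeded the resource limit"

if __name__ == "__main__":
    print(run_with_limit(slow_query, (1,), limit_seconds=5))   # completes
    print(run_with_limit(slow_query, (10,), limit_seconds=2))  # aborted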
Brian Larson said there are two parts to a report – data collection and presentation. Many people have very explicit and complex presentation requirements. The presentation should be entirely the user’s domain and you should let them use any tools they want to get the presentation correct. It does not have an IT impact.
Dan Nissen
Comments solicited.
Sunday, November 15, 2009
Friday, January 9, 2009
Farewell to Univac, Sperry and Unisys
My last day at Unisys was December 19, 2008. It has been my honor and privilege to have the people of Univac, Sperry and Unisys put up with me for the last 33 years. I will certainly miss all of them. I am not going far, living only 1.5 miles from the Roseville plant, and with my wife still employed at Unisys. I am looking forward to many things, and have no doubt the future is as bright as the past was. You can join me on LinkedIn, social networking for aging geeks :-) at http://www.linkedin.com/in/dannissen.
I am looking back at over 40 years of involvement in the computer field. I am still in touch with some people I met in September 1968 as classmates in Computer Science 203 - Introduction to Computer Science for Majors at the University of Wisconsin-Madison. That class got me using a Burroughs B5500 and a Univac 1108, which has served me very well for the last 40 years. I spent 7 years in Mad-City, forming the 1108 Underground, working the last three years for the computer center, and attending several USE conferences as a customer. I met many Univac people at USE, and when my fiancée decided she liked Sperry Univac and wanted to move to St. Paul, my interview and hire process was very rapid. We joined Univac in 1975 and have been here ever since.
I worked on the Query Language Processor, and then was asked to look at relational databases in the context of the Remote Processing System, a competitor to the MAPPER System. I became the supervisor of development for RPS in 1978. When MAPPER won, I moved in 1980 to the Display Processing System 1100 and worked on it for almost the next 10 years. I worked with people in Salt Lake City and Blue Bell on the VD Working Group (Visual Display). In 1984 I was asked to be the manager of DPS. In 1986, we merged with Burroughs and I learned a bit about Screen Definition Facility and a character-based windowing system called InfoView. We lost the technology battle over Microsoft Windows vs. InfoView, and in 1990 I moved on from DPS to POSIX. The year 1990 was interesting, as I was representing Unisys on the POSIX 1003.2 Commands, Shells and Utilities committee, where I used my non-existent knowledge of UNIX shells to help standardize them.
In 1991, I moved again, into Productivity Systems under Dick Ulmer, and picked up Productivity Systems Workbench, the common Windows desktop platform for LINC, MAPPER, ALLY and the original release of the INFOConnect terminal emulators. This was renamed Designer Workbench, and I worked on it until 1993, when it moved to Australia. I then moved into MAPPER 2200 and we did the ISAWS project. This little welfare eligibility application was projected to need $850 million worth of IX 5800, based on the initial sizing. We eventually completed over 80 weekly meetings, built something called Turbo-MAPPER, and sold the State of California over $50 million in OS 2200 boxes. They quickly went on to build one of the first computing clouds, surrounding those 2200s with 30 or so Packard Bell PCs, UnixWare MAPPER, and a set of runs on MAPPER 2200 that managed the use of the PCs to run the eligibility formulae.
In 1996, I was asked to be the lead designer on MAPPER, and worked on the initial release of CoolICE. In early 1997, I was asked to pick up PowerClient, which was what Designer Workbench had become. I created a new group and moved the product from Mission Viejo to Roseville. In 2001, I moved into the database group, where I picked up the Database Optimizer and Programmer’s Workbench, and then we started up the Eclipse IDE and Interconnect projects.
I am working hard at deciding what I am going to do in the future, and expect to share that with you soon.