A Quality Advisor White Paper by Richard E. Biehl.
Copyright 1993, Data-Oriented Quality Solutions. All rights reserved.

The following is a transcript of the presentation, TOPICS IN QUALITY INFORMATION ENGINEERING: QUALITY CONTROL ON IE PROJECTS, given to the Lincoln Information Engineering User Group at the Woodman Auditorium in Lincoln, Nebraska on Tuesday, February 9, 1993. The presentation was arranged and hosted by the Quality Assurance Institute.

Quality Control On Information
Engineering Projects

Specifically today, I would like to talk about quality control on Information Engineering projects. So to begin my discussion I have to highlight the distinctions to be drawn between Quality Control and Quality Assurance. Is there anyone here who would describe themselves as a Quality Assurance specialist in their organization? What have we got here... Data Administrators... data administration kinds of people... tool-oriented kinds of people... that's basically who I'm talking to.

Quality Assurance is a function that looks at process. Quality Control is a function that looks at products. That's the distinction to be drawn. If I were to start talking to you about how best to run a project, what kinds of practices work better than others, how we should be organizing our efforts, I'd be talking about the process we use to build information systems. I'm talking about Quality Assurance. Quality Assurance represents the majority of what we do at QAI; it's our primary function.

If I talk to you about what constitutes a good system, what a data model ought to look like, what a process model ought to include, if I talk about the product that we create in information systems then I'm talking to you about Quality Control. So Quality Control is verifying that the products that we create meet or adhere to the standards that we set out.

Quality Assurance says: look at the results of Quality Control, and use the data out of Quality Control as the driving force for improving your process. Quality Control is a source of data; it identifies defects and tells us where we have problems. Every defect is an opportunity to improve. One of our problems in Quality Assurance is that we have far too many opportunities to improve to simply attack all of them simultaneously. So we use data from our metrics program in Quality Assurance to decide which of the kinds of defects we're seeing are the important ones to go after. And we allocate our Quality Assurance resource, our improvement resource, to those areas that represent the biggest problems.

So Quality Control is a tool that we use to measure our own products. The short term benefit is that we correct the defects we're uncovering, and so hopefully produce higher quality products for our customers. But the long term and more important benefit of Quality Control is that it helps us to identify weaknesses in our processes. If all we do is Quality Control with an eye toward being a police person, stopping projects with defects from moving forward, we serve a purpose to our organization but we lose the long term benefit. We'll simply spend the next few years always stopping every project for exactly the same reasons. The trick is to roll Quality Control into a mentality of Quality Assurance. To say: find the most common problems, and go out and correct the processes that cause them. The processes represent the root cause. The reasons we see the defects we do are rooted in the processes we use. If I go out and review a data model out of a case tool, I know it will be full of defects. The challenge is to decide what was wrong with the data modeling process that led to a model containing those kinds of defects. I need to correct those processes, not simply police the defects.

Question: So isn't Quality Control/Quality Assurance another way of saying continuous improvement? Yes. That's the goal of Quality Assurance: to continually improve our processes so that the products and services we provide to our client base continually improve. And the challenge is to weave Quality Control through Quality Assurance, because without Quality Control, Quality Assurance is simply running blind. We're simply changing processes arbitrarily based on someone's hunch. That's management by opinion. The challenge in the quality arena is to manage by fact: to use Quality Control as the source of information for knowing where to attack our processes.

How many of you are looking in your organizations to get into a mode where you're trying to promulgate some kind of standards and procedures for people to use when practicing Information Engineering? Many of us find ourselves in that situation. So I want to start this discussion with two basic models that I hope you'll use in trying to practice Quality Control or Quality Assurance in your organization. They're multi-tiered models. They're conceptual tools for talking to your client base, whether that be management in your organization, who don't quite understand where you're trying to drive them, or analysts and practitioners of data processing in your organization, who are reluctant to adopt these new techniques, mostly because it's simply change and people are afraid of change, but also because they don't understand how it fits into the big picture.

So the first framework is to understand how Information Engineering fits into the overall construct that we call process. Where does it fit? How many of your organizations have a structured methodology that all projects follow? ...a couple of tentative hands... How many of you have a methodology, regardless of whether people use it? ...quite a few more hands on that one... People aren't using our methodologies. And unfortunately the methodology is the tool that we use to roll out process improvement, so I can't do any of that until I can get people using the methodology. This usually implies that an education program is necessary. Sometimes it means improving the methodology to get rid of the legitimate complaints that people have about it. But I've got to get people using processes or I can't possibly roll out process improvement. I can get some short-term benefit. I can roll out training programs on specific techniques, and people will gradually improve some of their techniques, but if we're looking for those order of magnitude improvements in our organizations, we have to roll out some sort of process management schema. The trick, I think, is to keep it informal and keep it small, so that we don't overwhelm the organization.

So the first thing we have to talk about, and it's a word people shy away from in many organizations, is methodology. We have to ask ourselves: what is the methodology that our organization wants to use to implement information systems? And methodology is fairly stable in our industry. For the most part, if you look at the long-term view of a project, we use a phase-oriented approach to building information systems. In fact, viewed in the long term it's even a waterfall model. We tend to cascade from one phase down to the next: requirements down to analysis, down to design, down to construction. I said long-term because if you look at it on the micro scale, we use things like prototyping and iterative design to jump back up the waterfall. The long term trend is to move from requirements through construction to get systems implemented. That's the way our industry runs. We may not have formalized that approach in our organizations, but for the most part we have a fixed methodology as an industry. It's fairly stable over time.

What makes our organizations different is how we choose to approach the work within the methodology. How we choose to approach work, how we view the world, is called discipline. Within methodology we bring disciplines to bear. Information Engineering is a collection of disciplines. It's the application of a set of structured techniques to solving the problems we incur in information systems. Disciplines are how we choose to view the world. I'm a heavily data-oriented person. Give me a problem and I'll draw a data model. I hope that's most appropriate, because that's what I'm always going to do. It's why we need a blend of people. Logical data design, or logical data analysis, is the discipline I bring to bear when I try to solve a problem. That's how I view the world. I view the world as a data model. It usually works; there are some cases where it doesn't, but it usually does. So I'm practicing a certain discipline when I practice my profession.

The challenge of the model is to apply the discipline within the methodology. To recognize that logical data analysis is just a way of viewing the world. To practice it effectively you've got to put it into some kind of context, and methodology is our context. The data models I'll develop during a requirements phase are quite different from the data models I would expect to develop during an analysis phase or a design phase. The discipline is the same, but the context in which it operates ultimately determines what I produce as an analyst. So disciplines by themselves can't be deployed. You can't simply teach people to develop data models and process models. What you're teaching them is just a way to structure their thought processes, not how to use those thought processes.

So whatever you decide to do in deploying Information Engineering, what you're deploying is not a methodology but a set of disciplines. You've got to try to put them in the context of a methodology. If you've got a structured methodology in house, and your people are using it, you'll find that job a lot easier. But if you don't have a structured methodology in house and you simply try to deploy Information Engineering as a set of disciplines, you'll find you get very low conformance. People simply can't deal with the complexity of the Information Engineering disciplines without the structure of a methodology. So the two have to go hand in hand. Some organizations choose to solve that problem by going out and buying a methodology. They'll go to a James Martin & Company and buy an Information Engineering Methodology, or they'll go to Ernst & Young and buy Navigator. So you can go out and buy a methodology that's already structured around the Information Engineering disciplines to begin with. That has the drawback of overwhelming the organization with methodology right from the start, as opposed to the incremental improvement you can accomplish by writing your own methodology and keeping it small and simple.

There are pros and cons to both approaches. The challenge is that if you want your organizations to use the disciplines of Information Engineering, you've got to provide some process structure through methodology. If you don't have any today, you could be looking at a year of deployment just to get to the point where you're ready to start asking people to practice some Information Engineering techniques. Because technique is the next level...

Within disciplines we practice techniques. And it's techniques that we go out and teach people to do and ask them to do. It's the products of the techniques that we perform Quality Control on. I told you I'm a data bigot. The discipline I practice is logical data analysis; the technique I use for the most part is entity-relationship diagramming. That's the technique. It's a very common technique today. I've been teaching logical data design for a decade. When I first started teaching it I didn't teach entity-relationship diagramming, I taught Michael Jackson techniques. Many of you may recall those: the structured charting of data for logical requirements. A completely different technique. The exact same discipline. Maybe it's matured somewhat since then, but for the most part we were trying to understand the business through its information.

So the techniques change, disciplines come and go, we tend to repackage them... Information Engineering, as a repackaging or a collection of disciplines, is a fairly mature concept that was relatively unknown ten years ago. The terms existed, the people existed, but the ideas matured. There's about a ten year horizon on disciplines coming and going. There's a longer horizon, as much as 20 years, on methodology. But techniques come and go on a faster scale. Ten years ago I wouldn't have taught entity-relationship diagramming. Five years ago I would have taught it, but nobody would have wanted to learn it. Today people are dying to learn it. Five years from now it's going to start to be a relic. I'll have fewer and fewer people trying to learn entity-relationship diagramming because the data base technologies will have moved on. Entity-relationship diagramming is very effective if you're looking at a relational world. But as we move on in data bases to parallel processing, distributed processing, neural network processing, we're going to find that entity-relationship diagramming as a technique starts to break down. It will break down in the very areas where hierarchic data diagrams broke down first. It's predictable.

If you're dealing with any kind of expert systems, if you're dealing with any kind of process control applications, maybe in the manufacturing world, you'll find you already have projects that are struggling to make use of entity-relationship diagramming because it doesn't quite apply. The current alternative to entity-relationship diagramming tends to be something along the lines of state transition diagramming for data. If you go with Texas Instruments' Information Engineering Facility line of case tools, you'll find that they support state transition diagramming for data very effectively, because that's an effective alternative for a process control type of application, where time matters more than space.

Or for expert systems applications, where the dynamics of the rule base are such that you can't proceduralize your rules; you have to be able to state them as declarative rules. So state transition diagramming works. In some of the DoD work, some of the more advanced R&D work in data processing, even state transition diagrams no longer form effective tools for data diagramming. They're on to what are called neural network diagrams, which I can only pretend to understand, but I've seen in the literature that they're starting to come into vogue for understanding data requirements.

So the techniques come and go. The horizon for technique is about three to seven years, so let's call it five. While the disciplines may come and go on a ten year scale, the techniques tend to come and go on a five year scale. But we've crossed an important horizon there. Quality specialists tell us that to roll out an effective TQM program, a program to continuously improve your processes, can take anywhere from seven to ten years. So whatever improvement we're looking to do in our organization had better not be tied to the success of any particular technique. Because by the time our improvement program is mature and really running full steam, the techniques we're using today will not be the popular techniques of the time.

We have to start to recognize the rate of change in our industry. If you're going to build an improvement program in your organization, try to do it at the discipline level, not the technique level. Try to promote data and process modeling and the value they provide to the business. Promote Information Engineering; it will be around for a long time. Be cautious about selling your organization on the value of entity-relationship diagramming or data flow diagramming or decomposition diagramming, the currently popular techniques within Information Engineering, because it's only a question of time before they pass and other techniques come into vogue. So don't tie your success to the technique. Tie your success to the discipline. Sell on the value of data modeling. Sell on the value of process modeling. And then teach people the techniques as the current best way to receive those benefits. Tie yourself and your benefits to the discipline, and then teach people to practice the techniques. Five years from now you'll be teaching them different techniques, but the benefits will still be there. The years you've spent convincing management that this is a good idea will not have been lost. That's a key success factor for you, so draw that line between discipline and technique. Sell above, teach below.

The fourth level of the schema, just to close it out, is the idea of tool. We call this the methodology-discipline-technique-tool schema, or the M-D-T-T schema. That's what we tend to refer to it as. Within any given technique you'll find one or more tools that will support the process. Now, often when we mention tool in the context of Information Engineering, the first tools that come to mind are case tools: the automation of Information Engineering techniques. I think they're the ones that people want to talk about the most.

At QAI we would totally support the approach of automating the process. Case tools are the enabler; they make Information Engineering possible. Think about the generic structure of an Information Engineering project. A small system, maybe a hundred function point system, has maybe 20 or 30 processes and two or three layers in its decomposition, and therefore some 50 or 60 data flows; probably 30 or 40 places where those data flows come to rest, so we've got 30 or 40 data stores; and probably a half dozen external agents. Some 80 to 100 objects have to be modeled in an Information Engineering project of only a small scale.

If we're practicing Information Engineering we ought to have entity model views for every one of those, so we're talking about creating hundreds of individual models out of these techniques for even a small project. That can't be done without a case tool. That's the real value of case tools: they enable us to practice the techniques that we've been trying to sell all this time, techniques we couldn't possibly practice without them. The fact is a lot of people aren't using the tools for those capabilities yet, but that's the real justification for the tool. There's nothing in Information Engineering we couldn't do without the case tool. The problem is that nothing would be practical without the case tool, given the scope of what we're trying to do. I might take paper and pencil and draw a hundred models, but I'm never going to want to rename an entity. It's not going to happen.
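To make the scale argument concrete, here's a back-of-the-envelope tally as a Python sketch. The ranges are the illustrative figures from the talk, not measurements, and the doubling for entity model views is my own simplifying assumption:

```python
# Illustrative ranges from the talk for a small (~100 function point) system.
ranges = {
    "processes":       (20, 30),
    "data flows":      (50, 60),
    "data stores":     (30, 40),
    "external agents": (5, 6),    # "probably a half dozen"
}

low = sum(lo for lo, hi in ranges.values())
high = sum(hi for lo, hi in ranges.values())
print(f"objects to model: {low} to {high}")  # on the order of a hundred

# If each object also carries an entity model view that must be kept
# in step with it, the individual models to maintain run into the hundreds.
print(f"models including entity views: at least {2 * low}")
```

Renaming one entity by hand across a couple of hundred paper diagrams is exactly the kind of bookkeeping that makes the tool indispensable.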

So it's the tools that enable the process. The tools aren't just automation. Tools are anything we invoke, any artifact that we invoke to help us get the job done. From a Quality Control standpoint, the most effective tools we can roll out are checklists.

There are three kinds of defects we talk about in the quality arena. The first kind of defect is a defect of omission: we forget to do something, or we omit something that we should have done. The second kind of defect is called wrong: we do something we shouldn't do, and therefore we have defects embedded in our products. And the third is called extra, one of the more common defects: we introduce something into our analysis that really isn't needed, but we stuck it in there anyway, either consciously or unconsciously; we've added functionality to our requirements that really wasn't needed by our customers. Of the three kinds of defects, the most serious in our industry today is the defect of omission. 75% of our defects as an industry are defects of omission. We simply don't do the things that we ought to do. We forget. We omit. We leave it out. As an industry we've learned to talk about that problem with a very positive tilt. We don't call them defects. We call them enhancements, we call them phase two, and worst case we call them prototyping.
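Classifying review findings against these three categories is the starting point for the metrics program Quality Assurance feeds on. As a sketch (the categories are from the talk; the findings themselves are invented sample data for illustration):

```python
from collections import Counter

# The three defect categories from the talk.
CATEGORIES = ("omission", "wrong", "extra")

# Hypothetical findings from a data model review -- invented sample data.
findings = [
    ("customer entity has no key attribute",      "omission"),
    ("order date stored as free text",            "wrong"),
    ("audit-trail entity nobody asked for",       "extra"),
    ("no relationship between order and product", "omission"),
    ("shipping rules never captured",             "omission"),
]

# Tally findings by category and report each category's share.
tally = Counter(category for _, category in findings)
for category in CATEGORIES:
    share = 100 * tally[category] / len(findings)
    print(f"{category:8s}: {tally[category]} finding(s), {share:.0f}%")
```

The point of keeping the tally is the Quality Assurance step that follows: the dominant category tells you which process to go fix first.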

So we've learned to rationalize the fact that what we built the first time isn't right. And the reality is that most of those problems are simply omissions. (Audience: "And job security.") And job security. But there's far more job security in a quality process than in a low quality process, as our industry is starting to see through the trend called outsourcing. To the extent that we start to meet the need, outsourcing isn't the problem; outsourcing is a symptom of the problem that we're not building the right systems. We're not being perceived as value-adding organizations. We can fix that problem through higher quality.

So if omissions are our biggest problem, we need tools to help us prevent omissions. And the most effective tool we find in site after site after site as an institute is the idea of a checklist. People tend to forget to do everything they're supposed to be doing. So if those people who do know what needs to be done, as part of the process of deploying something like Information Engineering, simply write down the things we hope will get done on every project and publish that list, most of our defects go away. Because no one is intentionally omitting function from our systems as we go. We simply miss things. We need to be reminded that certain things need to happen. Some case tools are effective at automating part of that process if we use the tools properly. I can bring in a KnowledgeWare ADW and run a conservation analysis report on my models to make sure that I haven't missed anything in terms of the data flowing through the data flows. The tool works great IF I've done the correct process model and populated all of the process objects with an accurate entity model view. If I've done all that work, the tool works great. But it is wholly dependent upon me having done a pretty thorough job in the first place.
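A checklist "tool" needn't be elaborate. Here's a minimal sketch of the idea: publish the list of deliverables, compare what a project has actually produced against it, and report what's missing. The item names are hypothetical, not from any published standard:

```python
# The published checklist of deliverables every project should produce.
# (Item names are illustrative, not an official standard.)
CHECKLIST = [
    "context diagram",
    "entity-relationship diagram",
    "process decomposition",
    "entity model view per process",
    "data store definitions",
]

def omissions(produced):
    """Return the checklist items the project has not produced yet."""
    done = set(produced)
    return [item for item in CHECKLIST if item not in done]

# A hypothetical project part-way through analysis:
produced = ["context diagram", "entity-relationship diagram"]
for missing in omissions(produced):
    print(f"OMISSION: {missing}")
```

Even this much attacks the dominant defect category: nobody is omitting deliverables on purpose, so a reminder is usually all it takes.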

But there's a great deal of hope for what case tools and the like can do for us in terms of Quality Control. One thing Quality Control gets out of the case tool is the idea that one way to avoid the wrong defect is to have a tool set that won't allow us to do it wrong. If the tool won't let us do it wrong then I'm in pretty good shape. And that's one of the distinguishing features among the case tools that we use to evaluate which is the best one for the process we're going through. I shouldn't draw a data flow directly from a data store to an external agent. I can't make that mistake in ADW. It won't let me... I'll get a little stop sign telling me that I can't do that. There's a tool helping me avoid a defect that would otherwise be embedded in my product as I move downstream. If I pick up Excelerator, it will let me do that. (Audience: "Plus a lot of other things.") One of the problems with Excelerator is that it will let me do anything. But one of the strengths of Excelerator is also that it will let me do anything. That's the tension. Unfortunately, many management types who are trying to spend all these dollars on case tools are convinced that once they've got these case tools they won't need their real senior people any more. Because now they can give junior people these tools and get a lot more done for a lot less money. Unfortunately, it's the opposite that tends to happen. The case tools tend to automate the easy stuff and leave the hard stuff for us. So if anything, the trend is in the opposite direction: the more you're dependent upon case tools, the more senior your analysts have to be to get the job done. One way to circumvent that, again, is to go with checklists. Provide tools so that people who are not experts will be reminded to do the right thing at the right time. We'll come back to that in a second.
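The "stop sign" behavior can be sketched as a simple validation rule: before accepting a data flow, check that its endpoints are a legal pairing. Only the one rule from the talk's example is shown; a real tool enforces many more:

```python
# Object types that can appear in a data flow diagram.
PROCESS, DATA_STORE, EXTERNAL_AGENT = "process", "data store", "external agent"

def flow_allowed(source, target):
    """The rule from the talk's example: no data flow directly between
    a data store and an external agent. Every other pairing is accepted
    in this sketch; a real tool enforces many more rules."""
    return {source, target} != {DATA_STORE, EXTERNAL_AGENT}

print(flow_allowed(PROCESS, DATA_STORE))         # True: a legal flow
print(flow_allowed(DATA_STORE, EXTERNAL_AGENT))  # False: the "stop sign"
```

This is the "wrong" category of defect being prevented at the source rather than caught in review downstream.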

I often go into a site and people will ask me: Rick, what's the best case tool? What should we buy? I never know quite how to answer that, because there are a lot of different reasons for asking that kind of question. So I usually have to query them, and I always use this framework to try to understand what is going on. The basic answer to the question is: the case tool you should buy is the one that best supports the techniques you hope to use within the disciplines you've got in your methodology. So to reverse that: you ask me what tool to buy, and I'm going to ask you what methodology you're using. Once I know the methodology you're using, I'll know what disciplines you're hoping to practice. Then I can talk to you about what kind of systems you build and help you to select the best techniques to use within those disciplines. It's only after I've done all of this that we can start to talk about tools. The problem is that the best tool for your needs is usually pieces of a lot of available products. That's the way case tools should be selected. That's not the way case tools tend to be selected.
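One way to make "the tool that best supports your techniques" operational is a simple coverage score: list the techniques your methodology's disciplines call for, then rank candidate tools by how much of that list each one covers. The tool names and feature sets below are entirely hypothetical; evaluate real products against your own list:

```python
# Techniques the methodology's disciplines call for (illustrative list).
required = {
    "entity-relationship diagramming",
    "data flow diagramming",
    "decomposition diagramming",
    "state transition diagramming",
}

# Hypothetical candidate tools and the techniques each one supports.
# (Invented feature sets -- evaluate real products yourself.)
tools = {
    "Tool A": {"entity-relationship diagramming", "data flow diagramming"},
    "Tool B": {"entity-relationship diagramming", "data flow diagramming",
               "decomposition diagramming"},
    "Tool C": {"state transition diagramming", "decomposition diagramming"},
}

# Rank the candidates by how much of the required technique set they cover.
ranked = sorted(tools, key=lambda name: -len(tools[name] & required))
for name in ranked:
    coverage = len(tools[name] & required) / len(required)
    print(f"{name}: covers {coverage:.0%} of the required techniques")
```

Note the direction of the analysis: the required set comes from the methodology first, and the tools are scored against it, never the reverse.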

What I usually find out is that someone says: No, no, Rick. That's a really nice discussion, but the reason I'm asking is that we just bought this one, and now we have to make sure that our methodology is consistent with it. That's the bottom-up approach to the framework: using the acquisition of the case tool to drive the process of building an effective process in an organization. I've never seen it work. I've never seen it work. The only organizations that have any hope of making that work are organizations that are already heavily Information Engineering oriented in their disciplines; they just haven't bothered to write it down yet. If you tend to be Information Engineering oriented, you can deploy case tools and see some incremental improvement. But if you're not Information Engineering oriented and you try to become so by rolling out a case tool, you buy a lot of shelfware. There are far more case tool licenses not being used than there are licenses being used. And most of those not being used are in organizations that bought the tool before thinking about process.

But again, it takes time to deploy a process. It can take one to four years to effectively deploy a methodology and its associated disciplines. That's time. And a lot of us don't perceive that we have that kind of time. But again, through appropriate Quality Control we can start to address that. With good Quality Control in place, we decrease the risk of proceeding down this Information Engineering case path without having everything in line that we ought to have in line. Because there are really two kinds of challenges going on: we're trying to improve our processes, and we're trying to improve the quality of our products. Quality Control helps assure that while our processes are in disarray, which is what they may be for the next few years as we try to do this, we've at least created a gate to prevent major defects from leaking out to our customers and users. So we may be out of control, but that's not going to impact our customers. You see, the textbook approach is to first fix your processes and then deploy them to create products, in which case you wouldn't need that gate because you'd have sound processes in place. But that's a long time. That's a seven to ten year timeframe before you really start to see the benefit. It's the right way to go, and ultimately it costs less, in a textbook kind of approach.

But very few management teams are willing to wait that long. They say: Well, we created a Quality Assurance group last year, why haven't we seen any changes? So the challenge is how we deploy Quality Control in such a way that we manage that risk. That's the way I'd like you to view Quality Control for Information Engineering. It's something that should eventually be able to go away, because we'll have sound Information Engineering processes built into our methodology. Because we don't have those today, let's create some controls that will prevent major defects from leaking into production as a result of trying to change our processes ad hoc. It's OK to change processes ad hoc as long as you've provided those kinds of controls. And realize that the very people we're asking to practice these techniques will be under increased stress, so we need to offer support to those people who are trying to do Information Engineering.

That brings us to the second model that I want to talk about. Because you can't do effective Quality Control without standards. And you can't have effective standards without a mission. How many of you are from an IS organization that has a written mission statement that everyone is aware of? ...some... many... That's something that is changing. A few years ago I would have seen no hands, or one, and he was lying. Today we have a lot of organizations moving toward building some kind of mission statement. Think about the mission statement, those of you that have one. Does your mission statement describe why you exist or what you do? That's the test of a true mission statement. Mission statements should answer the question Why do we exist?, not What do we do? If your mission is a what, you don't have any foundation to justify it. You're not in the business of building systems. You're in the business of helping the business to use information technology for its needs. What you do is build systems toward that end. Why? Because that's the skill set you bring to bear. But why you exist is because your organizations need to make more effective use of information.

Many of you are from groups that have a title like Data Administration, or some other staff function in your organization. For you, there's a book on mission statements that I would highly recommend: Peter Drucker's Managing the Non-Profit Organization. It's about a year old, maybe a year and a half old. What he characterizes in that book is basically how to manage a non-profit: an organization that from year to year doesn't know if it's got resources next year, where nobody feels they have to do anything for us, and where we're totally dependent on what are basically volunteers in our own organization to carry forth these principles, because we have no authority. If that doesn't sound like Data Administration or Quality Assurance, I don't know what does. I have found that particular book to be a gold mine of principles for how to influence people to do the things you're looking for them to do, even though you have absolutely no authority over them. It's better than any other management book I've ever found for managing the Data Administration kind of function, the staff function in the organization. Peter Drucker's Managing the Non-Profit Organization. And he talks a lot in there about this model, about how to develop a mission statement that's effective.

From mission we move on to the policies of the organization. I should be able to translate the mission into a set of policies. Policies represent the guiding principles that we put forward to our organization to help drive what we do. You ought to have certain policies. A quality policy, for one. How many of you have a quality policy up on the wall? Many organizations do, same as with their mission statement. They say something to the effect of: We will produce the highest quality systems for our customers. We'll work with our suppliers. They're vague. But they start to imply a philosophy for operating: how we want our people to behave in the workplace.

You should have a Data Administration policy. It says: we'll treat data as a corporate asset and manage it accordingly; maximize its use to the business and protect its integrity. You should have all kinds of policies. You should have a productivity policy: we'll be the least cost producer of products and services for our customers. Have those statements of principle, in such a way that the relationship between policy and mission is a belief that if we adhere to the policy, we dramatically increase our chances of fulfilling our mission. That's the tie. That's the belief system that has to be sold to management and by management. It says one of the reasons we have policies in Data Administration, one of the reasons we want to treat data as a corporate asset, is because that's one of the key factors we've identified for helping the business take advantage of information technology.

That's where we start to see the tie now to what we're trying to do. You've got certain policies that impact your organization. It's usually the quality policy, the Data Administration policy, and the productivity policy. We'll be the least cost producer, we'll treat data as an asset, and we're going to have high customer satisfaction and high quality in our products and services. And then bring these policies into your area of specialty. It's up to the staff functions and the line functions in the organization to translate policies into actions. The policy becomes your justification for existence. Information Engineering becomes the program you're selling. You believe, with your area of expertise, having benchmarked other organizations through this user group, that Information Engineering as a collection of disciplines is an effective means of supporting those policies. If you want to treat data as an asset, then understand data better. If you want to have high quality systems, involve your customer every step of the way in planning your actions. If you want to be the least cost producer, develop a structured set of processes. In essence, Information Engineering is the answer to the problem posed by your mission statement. It's your contribution to the whole. By itself it's not enough to fulfill the mission, but it's your contribution. That's how you should paint it. If you want to go to management to get permission to do something, and you can tie what you're doing to the mission of the organization and show that the organization is unlikely to achieve its mission if they say no, then they're going to have to say yes.

You're here to help support the policies. Because otherwise you're trying to sell against fear. You're trying to roll out programs because you think they're a good idea, people are resisting, and you wonder: why won't people do what I want them to do? Why won't people make use of my techniques? Why aren't people using the tools? I've given them the training, why won't they do it? They won't do it because you haven't created the tie between what you're asking them to do and the mission of the organization. If you can do that and people still won't do it, now you've got a discipline problem. Then you've got people in your organization who will knowingly say: I don't care about fulfilling the mission. And as Data Administrators, as Information Engineering specialists, we can't fix that. That's a management problem. But we're not at that point in the industry today. We haven't effectively tied Information Engineering to the organizational mission. So when people say I don't want to do that, the best we can do is whine. We're becoming a sub-industry of whiners. We have to break that pattern and tie what we do to the organizational mission. We have to believe in what we're doing, and sell it against the policies and missions of the organization.

Once we've done that we can kick into the third level of this hierarchy, and that's standards. You see, a policy is vague. Management puts forth policy, and frankly I hope policies stay vague. I don't want management micromanaging the organization. But there are certain policies out there that we're trying to support. Our job is to bring our expertise to bear and create standards such that if you live by the standard you will support the policy. It's hard to go to work in the morning and treat data as a corporate asset. You can't DO a policy. The challenge is to have standards in place so that what you're doing supports the policy.

One way to support the policy is that every project will support at least one subject area data model. Every project will develop process models. Start to develop standards of behavior: what will people do such that, if they do it, they are likely to be supporting the policy? So again, that's the selling bridge. If you're going to try to sell standards in your organization, you're asking people to do certain things. If you don't name your entities consistently, you are unlikely to support the Data Administration policy. If you don't create your models to be reusable by other people, we're unlikely to support the policy. If you work in a vacuum and don't review your models with other projects, we're unlikely to support the policy.

So it's no longer Data Administration, or whatever you call your group, trying to tell other people what to do, and having them say I don't want to do that - who are you to tell me what to do? You're simply pointing out to them that there are certain behaviors that they should be practicing if in fact they want to support the policies. And supporting the policy is how we've chosen to support the mission. So, in essence, you want to promulgate standards in such a way that the only way someone can disagree with your standard is if they already disagree with the mission. Now like I've said, if that's the case that's a management problem; that's a discipline problem. You're in the wrong organization if you don't support its mission.

That's what you're trying to do: off-load the disagreement outside of your function. Which puts the onus on us: as we try to deploy new processes, as we're trying to implement new standards, it's up to us to make sure that every standard we ask for is going to support the mission. There are a lot of standards that are just plain wrong. We test standards on a number of criteria, and one of the primary tests is criticality. Make sure that NOT adhering to the standard would be a bad thing. If you don't develop any kind of data model, then it is very unlikely that you'll be able to support the business. That has to be a true statement for data modeling to be a standard in your organization. If not, it's a guideline, a recommendation, call it what you want. But it rolls down into the next level of the pyramid.

Because standards have to be supported by procedures. Procedures are HOW we want people to work such that if you follow the procedure, the product you create will conform to the standard. One problem in most standards programs is that we simply promulgate rules. You must have this... you must do this... you must do that. And we never tell people how. The most common reason that people don't comply with standards, and it isn't that they disagree with them, is that they don't know how to comply.

I can promulgate a standard in my organization and say no program will have a McCabe cyclomatic complexity of greater than fourteen. It's a great standard. It's real important stuff. But how the heck do you write a program that meets that complexity metric? I have to understand what the metric is. I have to understand what it is in my job that contributes to it. And I have to know that it's reasonable. Not just critical, but attainable. If I look across my organization and find that the average program has a cyclomatic complexity of thirty, then passing a standard that says it's going to be fourteen isn't achievable. So even if you can tell me how, I'm not going to follow the standard because it's not possible. It's not doable. It's not attainable. But if I find that the standard is reasonable, if I find that there are lots of people writing programs with that complexity, then the next question is: how would I? I need a procedure for programming. That can be a long written procedure, and I've seen people write books of procedures for how to write code, but at QAI we recommend that you reduce that procedure to a checklist.
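
As a sketch of what such a standard means in practice: cyclomatic complexity is, roughly, the number of decision points in a program plus one. The talk is about COBOL programs, but the idea carries to any language; this Python sketch is a simplification of the full McCabe measure (it counts decision nodes in a parse tree), and the threshold of fourteen is simply the example standard from the talk, not a recommendation.

```python
# Sketch: estimating McCabe cyclomatic complexity of a Python function
# and checking it against a standard. Simplification: complexity is
# taken as (decision points + 1), counting branch-like AST nodes.
import ast

DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                  ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Count decision points in the source and add one."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, DECISION_NODES)
                    for node in ast.walk(tree))
    return decisions + 1

def meets_standard(source: str, limit: int = 14) -> bool:
    """True if the program conforms to the complexity standard."""
    return cyclomatic_complexity(source) <= limit

simple = "def f(x):\n    if x > 0:\n        return x\n    return -x\n"
print(cyclomatic_complexity(simple))  # 2: one decision point plus one
print(meets_standard(simple))         # True
```

The point of the sketch is the procedure side: a programmer can run a check like this before a review, rather than being told only that the standard exists.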

Certain things that we're hoping people will do or not do, that we bulletize. You don't want them reading a lot of words. So give me a checklist. Do all paragraphs have an exit? Hopefully that's a yes, because saying no increases cyclomatic complexity. Is the deepest level of IF nesting four levels before I go to a PERFORM? Have I avoided the AFTER... VARYING clause on a PERFORM? Do I use in-stream code rather than a PERFORMed paragraph if the code is only used from one place in the program? I go out and discover. I find, through my Quality Assurance program, the best practices. And I add those best practices to the checklist. And I give out the checklist as my procedure. It says: develop code that adheres to these standards. That's what I'm after. I'm after a combination of standards and procedures so that people can in fact do the job.

In a good standards program, enforcement isn't the issue. I know a standards program is in trouble in any organization as soon as they start talking about enforcement. Because people follow standards if they have the tools and techniques necessary to follow them and if they believe the standards support the mission of the organization. Those are the key factors. If either of those is missing, it isn't a question of enforcing the standard, it's a question of fixing the standard. The standard's broken. It's the standard that's wrong, not the person. Make sure that every standard you try to promulgate is relevant to the organization, relevant to its mission, and that you have in fact given people the skills necessary to follow it. We're talking here in terms of Information Engineering tying to the standard. But the result of the standards program doesn't have to be committees of work, and reams of paper, and books and books full of procedures for doing Information Engineering. Most of the key elements can be reduced to some very basic checklists. The challenge is to identify for your organization the kinds of things that go wrong the most often. I could develop a whole checklist on how to validate a data store, which is useless if the organization hasn't matured to the point of developing data flow diagrams. So you've got to decide for your own environment what are some of the tools that we can put in place.

The two models go together. I can take the methods, disciplines, techniques, and tools model and put it side by side with this one and recognize that the process model reduces to the standards and procedures model. So I may have standards that represent my methodology. You will always do a requirements phase and get signoff from the customer. You will always do an analysis phase. You will always develop data requirements within the methodology. You'll tier your standards. If you're not yet even asking people to understand data requirements, if there's no standard that says you will understand data requirements during the requirements phase, then it's too early to demand that they always develop an entity model. So don't jump too low in the M-D-T-T framework. Start to standardize from the top. Get them to standardize on WHAT they do before you worry too much about HOW they do it. Each of you comes from a different organization, and I looked at the sign-up sheet and saw that there are 10 or 12 organizations here; each of your organizations is at a different level in its Information Engineering maturity. So the worksheets, the processes, the standards that you need today are different from those of other organizations in the room. The advantage you've got in being in this kind of user group is that you have an opportunity to share your results and help move each other up that scale.

Now what I've given you are just some samples. Let me just walk you through these. I just wanted you to have something to think about. They're not necessarily things you can take and use right away. But I think they might provide you with some guidance on how to tackle some of your requirements. So let's look through some of this.

The first page inside your handout is a requirements defect classification list. One of the things that we see at the institute all the time in Information Engineering shops is that they concentrate a great deal upon the analysis phase: data and process models, decomposition, things of that nature. Or they move too quickly to structure charting and relational translation in their design tool, only to find that most of their problems, particularly in case tool use, exist back in the requirements phase. This particular sheet I developed at a site where we were using KnowledgeWare ADW/PWS to develop a requirements document, and we were using objects like information needs, problems, goals, and organizational units. I don't know how many of you know the ADW product, but these tools are all very similar actually. So we were doing a requirements definition, trying to understand the organization and its problems and things of that nature.

What we published was a document out of the ADW tool that qualified and described about 700 requirements, distinct statements of requirements in the information needs, problems, and goals categories, with definitions and cross-referencing to organizations. Really using the tool the way the tool was mechanized to be used. The question at that point became: great, we've got this great tool, and without it we couldn't have produced this pile of paper, but is the pile of paper any good? Let's go through and find the most common defects in a requirements statement coming out of the KnowledgeWare case tool. And what we found were eight common defects. We took the eight most common defects from that project, and at that site we developed this little checklist and guide and published it to everyone else doing requirements analysis. It was based on only one project, but it was a big project. And the other projects ate it up. They said: this is wonderful. If I had this list before I did my requirements I wouldn't even have most of these defects, because I would start to think about these things. And that's the whole point of Quality Control. The point isn't to catch them after the fact and say gotcha, but to give people the tools they need to validate their own work and hopefully avoid most of the defects.

So I put that list into this packet because regardless of the project that you are on, if you're using a case tool to do requirements definition, I'd be willing to bet that these eight defects will be your most common. The question becomes: which types are most common for you? For me to be able to say how to fix your requirements process, I've got to know which of these eight is most common in your site. That's the basis of Quality Assurance: if I were to review a project with a thousand requirements and the most common defect is "possibly redundant objects", number five, then I can go meet with the team and talk about how to avoid that kind of problem. I can make a process improvement by recognizing a common problem. But giving that team that talk when their actual most common problem was "incomplete associations", number seven, would be the wrong talk. So these are a shell for understanding common defects.

The page after is a sheet for trying to collect such data. Pick a pilot project in your organization, one that is just coming out of requirements, and suggest to them that you conduct a review using this as a data collection sheet. Let's go through and review your requirements document, record every defect we can find, and try to categorize them into these categories. You may find that you need to change the list of categories; you may have things in your environment that wouldn't have been in the site that came up with this sheet. But let's start counting and tracking defects in requirements. The results will give us a tool to start understanding what's going wrong in the process, and to begin exploring new ways to use the tools, or perhaps a need to bring in supplemental training.
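
The counting the sheet supports can be sketched in a few lines: tally each recorded defect by category and surface the most common one, which is the number Quality Assurance needs. The category names "possibly redundant objects" and "incomplete associations" come from the talk; the third category and the requirement identifiers are illustrative placeholders, not from the actual classification list.

```python
# Sketch of tallying recorded requirements defects by category, as the
# data collection sheet suggests. Substitute the eight categories from
# your own classification list; these records are illustrative.
from collections import Counter

# Each entry: (requirement id, defect category) as recorded in a review.
recorded_defects = [
    ("REQ-0012", "possibly redundant objects"),
    ("REQ-0019", "incomplete associations"),
    ("REQ-0031", "possibly redundant objects"),
    ("REQ-0044", "missing definition"),
    ("REQ-0052", "possibly redundant objects"),
]

tally = Counter(category for _, category in recorded_defects)
most_common, count = tally.most_common(1)[0]
print(most_common, count)  # possibly redundant objects 3

# The defect rate the talk mentions: defective requirements over total.
defective = len({req for req, _ in recorded_defects})
total_requirements = 700
print(f"{defective / total_requirements:.1%}")
```

The most common category, not the raw count, is what decides where to spend the improvement resource.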

The tool is just a mechanism. You're looking for a way to assure that what's being produced out of it is correct. And the deliverable that came out of this project looked great. But it turned out that the defect rate was over 60%. 60% of the 700-plus requirements were defective in one or more of these eight categories. It took two months to produce the report, and three weeks to fix it. Had we had this checklist available, had we built some training around these common defects before we produced the report, we could have had a higher quality product to begin with and not wasted three weeks on rework.

Defects are opportunities for improvement; they're not things to beat people up over. Unfortunately, in Quality Control that's our major obstacle. People are afraid of having someone do Quality Control on their products. So the trick is to do education. As part of a broader quality program you can educate people that the reason you're looking at their work is to identify opportunities to improve. But it's a bias you have to overcome. So that's a potential requirements tool that categorizes defects.

The next two pages represent the same basic idea from a different organization: a questionnaire designed to review data models. If you're really practicing Information Engineering, and again this particular sheet was based on the Information Engineering Workbench, KnowledgeWare's DOS tool, the question is: what kinds of things can be wrong with a data model in the tool? Well, let's come up with a list of questions we would ask when reviewing an object. When this checklist was first introduced, it was used as a probing tool. People would come to us with data models and we would use the checklist to review them and give back a list of all the defects. What happened over a period of a couple of months was that, well, people got smart. They said: hey, why don't you give me that checklist before I do all of the models, and none of this stuff will go wrong?

So what happens when you roll out Quality Control is that initially, as you test your tool, it looks like your defect rate goes through the roof. It's not that defects are actually going up; it's that you are now identifying them. You see, they were there before. But what quickly happens is that people start grabbing the checklist and saying: well, if you're going to use this checklist to measure me when I'm done, I'm going to read it before I start.

Quality Control is often perceived as a reactive tool, but it can actually be a very proactive tool. If, as I am modeling, I am aware of all these questions, then I'm going to model better. That's the basis for a checklist: most of the defects that we're talking about here are omissions, things people just didn't do in the rush to get the models done. Now, you might agree or disagree with any particular question; some of them are pretty stringent. The checklist presumes that you have finished analysis and are telling me that you're ready to do relational translation and move into design. These are the standards we hold you to.

If you apply the same questionnaire earlier in analysis there are a lot of facts you wouldn't have. In fact, there are a lot of things that this checklist asks for that are not properties the Information Engineering Workbench tool can capture. If you look on the second page under "relationship" it says "Is the product of the source entity volume .... identifier?" First of all, I've got to train people in what that question means. If you want to use this kind of checklist you would have to include some kind of resource or training tool to go with it. See, what that question is asking is: if I have an associative relationship between two fundamentals, do the average cardinalities match?

If I've got ten products each on an average of two orders, and five orders each with an average of three products, do those products match? Ten times two is twenty, but five times three is fifteen; if the products don't match then something's wrong with the data model. I don't have to build the system and test it to find that out. The analysis tells me it's wrong. So we want to validate those things in analysis. That's a fairly mature question to ask. If you've got an organization that's still asking what the difference is between a fundamental and an attributive entity, then you're probably not ready for this question. You've got to tailor the questions to the maturity of your organization. Some of them are fairly quick. Attributive entities: is the unique identifier one attribute in addition to a relationship? Associative entities: does the unique identifier include only relationships? Fundamental entities: does the unique identifier include only attributes? Those are Quality Control tests that you can use fairly early in your maturity to make sure that people are modeling properly. Because the tool itself doesn't enforce those rules. You've got to be prepared to.
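
That cross-check is plain arithmetic: each side of an associative relationship implies a count of associative occurrences (entity volume times average cardinality), and the two counts should agree. A minimal sketch, using the product and order numbers from the talk; the entity names are illustrative.

```python
# Sketch of the cardinality cross-check: both fundamental entities
# related through an associative entity should imply the same number
# of occurrences of that associative entity.
def associative_occurrences(entity_volume: int, avg_cardinality: int) -> int:
    """Occurrences of the associative entity implied by one side."""
    return entity_volume * avg_cardinality

# The talk's numbers: ten products on an average of two orders each,
# versus five orders with an average of three products each.
from_products = associative_occurrences(10, 2)  # 20 order-line occurrences
from_orders = associative_occurrences(5, 3)     # 15 order-line occurrences

print(from_products == from_orders)  # False: the model has a defect
```

A mismatch like this is caught on paper during analysis, long before anyone builds or tests the system.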

The process model questionnaire is on the following page. It's a one page sheet that was used on the same project to measure compliance with what we want process models to look like. And again, you, in your shop, might disagree with some of these standards; the point is that you've got to tailor it to your own requirements. But by writing down what you're looking for when you review a model, you're telling people in advance how you want them to work. And people love this kind of stuff. I've never seen a site yet where this kind of stuff wasn't well received. And you can give it to them before they develop the model, and that alleviates some of the fear. It's painful to always be modeling knowing that someone over there has some kind of checklist they're going to use to check my results, but I don't know what it is. That's where the fear comes from. By publishing your checklist in advance the fear goes away. What it comes down to is that only an idiot tries to go into production not adhering to these standards. It's like publishing a test for a class in advance: the day before the test I hand out the questions and say study tonight and come take the test tomorrow. Only the idiots are going to fail. They will. There will be a couple. That much I've learned in my teaching. There are some students who will walk in and not know any answers. But again, that's a different kind of problem.

It ought to take some of the pressure off of us as data administrators to be the experts. Because we're the ones in the organization who know how to do this stuff, and everyone else is struggling to do it, what we find is that our time gets taken up answering questions and trying to give support. To the extent that we try to back away from that, people complain that there's no support and they're not going to use the technique. So to fix that we try to dive in. But what we find is that we spend all day with tactical problems and never move forward strategically. By publishing the expectations you'll find you take a lot of pressure off your own organization. That's the process model sheet.

The defect recording worksheet on the next page is an example of a sheet that can be used to record defects. You need such a sheet. If you give out a questionnaire to people and say identify your defects, the second half of the formula is: send me a report of your defects. And again, there's going to be some cultural resistance here. You've got to teach people that the reason isn't so that I can keep track of all the mistakes you made. My job is to identify trends, to see what the most common defect in the models we're creating is. Because management only gives me so much time and resource for education, for consultants, and for support, I have to figure out how to spend it.

What's the most common problem in a data model today? In organizations that are fairly mature those problems tend to be related to cardinalities and properties of the relationships. In organizations that are less mature they tend to be related to naming. So which training do I bring in? Do I bring in a specialist to teach you how to optimize your cardinalities and do third and fourth normal form, or do I bring in some basic training in how to model, so that you'll stop creating a new entity every time you think you have a new one, and start looking for aliases and doing some data administration?

These two kinds of training programs are completely different. And I may not have the resources to do both of them. So I have to know the defects in the environment to know which one to bring in. I have to have a way to figure out what's the most common defect in my environment. This is just one example of how to catch that. Is the defect in my process or in my product? Is it an omission, a wrong, or an extra? Is it major or minor? And what area is it in? Did I misinterpret a business rule? Was my project scope wrong? Do I have an invalid work assignment, or project assignment? What is it that's breaking in my project? And what was the cause? Do I not have enough training? Do I not have enough time? Are my personnel inexperienced? What kind of problems am I going to try to address?

You might see that there are trends in defects that are very obvious within a project but aren't across the board. Only this project is creating entities like they're going out of style. So I'm going to bring in some specialized support for that group rather than an across-the-board training program. So it's a way to target opportunities by capturing my defects. And it gives me data to go back to management with, to show that the investment we're making in checklists is in fact resulting in a declining rate of defects. We're seeing fewer defects per person per week, and the average severity of those defects is in fact going down. That's the progress we want to show. If you're nervous about whether that will actually be the result, rethink some of your programs. If you have high confidence in Information Engineering and high confidence in Quality Control, you can take it as a given that those defect rates are going to go down over time.

And then the last example is kind of a generic sheet I've just tossed in because it's the one that's not Information Engineering related: a meeting attendee satisfaction survey. At one of the sites that I'm in, we've passed a standard that this sheet has to be filled out at the end of every meeting by every attendee. If you look through the questions you'll see that if it wasn't a good meeting, you're not going to score very well. What we have found at that site is that the number of meetings has gone down by 40% and the average duration of meetings has been more than cut in half. People are meeting less, and for less time, because they suddenly have criteria to determine whether the meeting they are holding was successful or not.

People are starting to challenge, and quality is going up dramatically. The person who runs the meeting collects the sheets and looks through them, and there is usually a trend. You could have done a better job on the agenda. We shouldn't have wandered so far from the agenda. We should have given people more advance notice. We do in fact collect and report the results, though we haven't yet tried to use them for a comprehensive program of improving meetings. But just from using this kind of checklist, what we're finding is that people in general are improving the quality of their meetings.
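
Spotting that trend is the same tallying idea as the defect sheets: average each survey question across the attendees and see which one scores lowest. A small sketch; the question wordings and the 1-to-5 scale are illustrative, not the actual survey.

```python
# Sketch of finding the trend in meeting satisfaction surveys: average
# each question across attendees and surface the weakest area.
from statistics import mean

# scores[question] = one score per attendee, 1 (poor) to 5 (good)
scores = {
    "Agenda published in advance": [2, 3, 2, 1],
    "Meeting stayed on the agenda": [4, 4, 3, 4],
    "Right people were present":    [5, 4, 5, 4],
}

averages = {question: mean(values) for question, values in scores.items()}
weakest = min(averages, key=averages.get)
print(weakest)  # Agenda published in advance
```

The lowest-scoring question is the "you could have done a better job on the agenda" signal: the one concrete thing to fix before the next meeting.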

So again, it's by publishing the criteria in advance. If you know that you're going to be scheduling a meeting, and you know that this is the criteria for measurement, you're probably going to publish an agenda. So if you've never done it before, only an idiot is not going to publish one knowing these are the criteria we use for measurement. So you've got a natural and immediate improvement by rolling out these kinds of checklists.

At QAI we recommend, strategically, a comprehensive program to improve the processes in use in your organization. That's the Quality Assurance role, and we promote it very heavily; I don't mean to play it down just because I've chosen to talk about Quality Control today. But what we find is that by introducing some Quality Control tools, you can dramatically improve the short-term productivity and quality of your organization and reduce your training budget. Think of the training it takes to teach people the techniques implied by some of these worksheets. And you've got many people in your organization who have the intuitive skill to do it once they have the checklist. They don't really want the training. They say: oh, that's what you want. Particularly if you use a lot of consultants. If you've got a lot of outside contractors in your organization, they tend to come in with an adequate skill base; they just have no idea how you operate. These kinds of tools can be very effective.

We strongly urge you to categorize the key areas where you're looking for short-term success in Information Engineering, and then ask: what are the attributes we would look for in the products being produced there? Then build some kind of checklist, or reminder capability if you will, to get people to try to do those things in a more effective way. That's where we view the role of short-term-improvement Quality Control tasks within a deployment of Information Engineering. Thank you.