
What Happened to Y2K? Koskinen Speaks Out



White House Y2K Czar John Koskinen's summary of Y2K, 01/27/2000

US Department of State
[International Information Programs]

[Washington File]

27 January 2000

Transcript: What Happened to Y2K? Koskinen Speaks Out

(Administration Y2K coordinator assesses global remediation) (4,650)

The costly effort undertaken in the past two years to deal with the
Year 2000 computer problem prevented massive disruptions in systems
and services during the date rollover into the new millennium,
according to White House Y2K coordinator John Koskinen.

Koskinen, Chair of the President's Council on Year 2000 Conversion,
said in a January 18 interview in Washington that the relatively
problem-free date change that occurred is an indication not that the
Y2K problem was not serious, but that the work devoted to fixing
thousands of computer systems worldwide was successful.

Koskinen said the absence of serious Y2K disruptions in developing
countries, where remediation efforts had lagged behind those in
industrial countries, is explained by the less intense reliance in
those countries on digital technology, and by the fact that they were
able to apply the lessons learned from dealing with the problem
elsewhere.

Koskinen spoke with the Office of International Information Program's
Paul Malamud about the smooth transition into the year 2000, and the
work that made it possible.

Following is a transcript of the interview. In the transcript,
"billion" equals 1,000 million.

(begin transcript)

Q: January 1 has come and gone, and reports show that there were fewer
disruptions of computer operations and infrastructure, on a global
basis, than some had feared. In retrospect, do you feel the advance
publicity and the large amount of money that went into fixing computer
systems worldwide was overblown? Could this have been handled by
smaller "fixes" performed on an ad-hoc basis after January 1?

A: I think a lot of people did do it in an ad hoc way, at the end, and
seem to have gotten through it well. However, for organizations using
large information technology structures there was no way they could do
it at the last minute.

The major banks around the world worked on this for several years
together, because you are talking about organizations that have
millions of lines of software in code that had to be fixed. In fact,
one of the reasons that people thought the world, as a whole, was
going to have difficulty was that it takes so long to work through
those big systems.

You have to distinguish governmental organizations and private-sector
companies that had major software problems from organizations that had
more straightforward information technology challenges. I think what
happened was that some smaller organizations and governments have less
reliance on complicated systems, and therefore, a lot of their systems
either were not significantly affected by Y2K or could be taken care
of in a relatively short period of time for relatively little cost.

When people started working on Y2K no one knew exactly the full impact
of potential failures involving large networks of computers. In
addition, no one knew where in power plants, telephone systems, or
chemical plants date-sensitive "embedded processors" might have a Y2K
problem. My favorite example is elevators. Two or three years
ago, the assumption was that elevators were at risk. There was concern
that some elevators -- if they were dependent on date-sensitive
computer chips -- might malfunction. But after about a year of
testing, it turned out elevators did not have a problem. This meant
that if you were a country or company that started your Y2K
remediation efforts late in the game, you learned from the experience
of others that you didn't need to be very concerned about elevators.
And the same held in chemical plants: it turned out there are only a
relatively small number of critical systems in a chemical plant.

The U.S. Chemical Manufacturers Association and the Environmental
Protection Agency issued a brochure in the middle of 1999 that said
"These are the systems that are at risk. If you are using these, this
is how to fix them; if you are not using these, you are probably in
pretty good shape." So what happened was that as a result of a lot of
good work, the countries and organizations that started later had the
benefit of all that background and that research and information which
was fairly freely exchanged; so that as they moved into late 1999,
they could actually focus on the things most at risk.

But then, turning it around, if everybody had waited until early 1999,
I think the people who run the major banks around the world and
similar large institutions would tell you the Y2K fix would never have
gotten done. In the case of the federal government, for instance, we
started in 1995 in a coordinated way -- some U.S. government agencies
began their Y2K remediation efforts even before that -- and four years
later, into the middle of 1999, people were still working on their
systems as fast as they could. So the reason a lot of serious
computer programmers thought the world would never make it was because
of the magnitude of the challenge.

Now could there have been less hype around the edges of the issue with
some people saying the world was going to come to an end because of
Y2K? We had a lot of difficulty over the last year and a half
convincing people that progress was being made. The federal government
prediction was that, in fact, there would be no major failures here or
around the world, failures impacting entire nations. We also felt
there would only be scattered outages in the United States; but that
was seen as a minority view by some.

So there was a certain amount of press coverage and hype about whether
or not the problem could be solved that probably we could have done
without. Fortunately, however, the public did not overreact, which was
our concern. And to the extent that publicity about the Y2K issue got
more people in the last six to nine months to really focus on the
problem, I think it probably helped us come to a very successful
conclusion. I don't think there is anyone who worked anywhere around
the world on the problem who thinks that it was not a major problem.
There is no bank I know, there's no power company I know, there's no
telephone company I know -- I talked to a lot of them -- who feel that
they wasted their time or their money, or if they had spent just fifty
percent less they could have done just as well. I think all of them,
looking back on it, are very pleased that they got through without any
major failures.

Q: It may be true that the time and financial resources spent
reprogramming computer systems were well worth the sacrifice. However,
there was also concern about "embedded chips" -- that is those
computer chips that direct the operations of machines and consumer
appliances. There was an assumption they might be date-sensitive and
malfunction on January 1, 2000. Yet, there have not been many reports
of problems. Why not?

A: Well, what happened fortunately is most embedded chips turned out
not to be date sensitive. There are 30-50 billion out there. When I
started this job a couple of years ago, I fondly referred to them as
the growth industry of the problem, because people had begun to worry
about them, yet there was no way you could get anybody to tell you the
answer. I met with manufacturers of various parts of the chips, the
chip manufacturers, the people that put them together, power
companies, telephone companies -- nobody knew the extent of the
potential problem.

The upshot was that (a) a lot of work had to be done investigating
embedded chips, and (b) a lot of people became concerned that this
would be a major issue. The advantage of the issue, however, was it
got people to look beyond pure information processing. Everybody knew
that banks, insurance companies, financial institutions, payroll
systems were date sensitive, because they calculated how old you were,
how long you had been working, what day of the year it was. People had
not spent as much time taking a look at what went on in other kinds of
operations: oil refineries, power companies, power plants, etc.
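
The two-digit date arithmetic described above can be sketched in a few
lines. This is a minimal, hypothetical illustration (a payroll-style
calculation, not code from any system discussed in the interview) of
why such calculations broke at the rollover:

```python
# Illustrative sketch of the core Y2K defect: legacy systems stored
# years as two digits, so arithmetic across the century boundary broke.
# (Hypothetical example; function name and values are made up.)

def years_of_service(hired_yy: int, current_yy: int) -> int:
    """Legacy-style arithmetic on two-digit years."""
    return current_yy - hired_yy

# In 1999 ("99"), an employee hired in 1985 ("85") looks correct:
print(years_of_service(85, 99))  # 14
# At the rollover to 2000 ("00"), the same arithmetic goes negative:
print(years_of_service(85, 0))   # -85
```

A negative tenure like this is exactly the sort of result that would
have corrupted interest, age, and payroll calculations downstream.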

Fortunately for the world -- and I think this is one of the reasons
you did not see major infrastructure failures -- the chips themselves
generally turned out not to care what date it was. But what we did do,
because of the focus on embedded chips, was look at control systems,
which are basically software or computers that run operations. So if
you go onto a plant floor, you go onto a ship, you go into an oil
refinery, what you see increasingly is people sitting at computers
running the place. They are getting information from all those
embedded chips, and it's all coming into a computerized process.

So the reason, for instance, that airports had a problem with runway
lights was not that the lights themselves had embedded chips that
cared about what date it was, but that the chips in the lights fed
into a control system that set the cycling for the lights, and that
control system cared what date it was. So the bottom line
was, embedded chips turned out to be much less of an issue than people
worried about: once you could find the control panels, you needed
simply to update or check those. And, of course, these issues are only
relevant when sophisticated control systems are in use.

As we became familiar with the issue, we began to appreciate the
extent to which technological development varies throughout the world.
A lot of operations crucial to the functioning of industrial
infrastructure turned out to rely on manual or analog, rather than
digital, controls. It turned out a lot of the power and telephone
systems around the world were, in fact, not affected by the embedded
chip problem, which is why those countries had to spend
less and also why they had less difficulty.

But even in the United States and England and places where they have
very complicated systems, because they paid attention to them early
on, they were able to replace the switches, replace the control
systems wherever they needed to, to make sure they could continue to
run them. I think we got lucky in the sense that it turned out the
potential for the chip itself to stop the operation was relatively
minor. The risk turned out to be again back in the software control
processes, but it was important to find those to make sure that smart
building systems, card access systems, and plant control systems in
those computers were checked. Because up until that time people were
only looking at their financial management systems.

Q: Some press reports estimate $200 billion was spent worldwide on
preparing for Y2K. Do you believe that is an accurate figure?

A: I think that's liable to be a more accurate estimate than the $600
billion number you see. This problem has been unique. It has been
global. The early estimates were that $300-600 billion would have to
be spent. That range itself gives you an idea that those are pretty
much guesses.

We are very confident we know how much the federal government spent,
which was $8.5 billion. The Commerce Department last fall did an
analysis of all the available reports of actual expenditures, and
estimated that in the United States the federal government and others
spent about $100 billion to remedy the Y2K problem. We estimate that
that's probably close to half of what the world spent, so that's where
the $200 billion comes from. That's the lowest number you'll hear.
Everybody's still talking about $300, $400, or $500 billion. I think those
numbers do not correspond to reality. But even if it is only $200
billion, that's a lot of money.

Q: Did the Y2K remediation process turn out to be a financial bonanza
for computer engineers, consulting firms, etc., who were called in?
Some have suggested they may have had a stake in emphasizing the
seriousness of the problem.

A: No, I think actually if you look at it, at least in the United
States, a lot of corporations and certain federal agencies did the
work themselves, with their own staffs. There clearly were consultants
and people willing to work on the outside, and one of the concerns
when I started this job was there wouldn't be enough programmers
available anywhere to be able to deal with the problem. That shortage
never materialized. This was, in part, because people got
better at figuring out how to fix these systems with windowing
techniques and other technical fixes and partially because as work got
done, people doing that work were freed up to work on other systems.
Although it's hard to pin down the statistics, I think a significant
amount of the work was done internally, in many places.

A significant amount of the money spent to remedy the problem went for
upgraded equipment. Some people say that this was all a plot for all
the information technology companies to sell more stuff. The truth is
more subtle. Many of the companies that produce information technology
over time provided free computer software "patches" designed to thwart
the Y2K bug, or other kinds of free upgrades or information. When
questioned about Y2K, the answer from these companies wasn't
necessarily "Buy a new one of our things." The answer was in three
categories: either "It's okay," or "It's okay with a fix that we'll
provide to you -- either sell it to you or give it to you," or "It's
too old and we are not servicing it anymore and it doesn't work and
you have to get a new one."

I think what happened with a lot of companies, and where a lot of the
money was spent, was they looked at old legacy systems and decided
that since they were going to replace those systems sometime in the
next two to three years anyway, they might just as well replace them
now, rather than fiddle around and try to figure out how to fix them.

I think part of the reason people are talking about a productivity
gain in the global economy in recent years is that, prompted by fears
about Y2K, a substantial amount of the money went for consolidating
and getting rid of old legacy systems and developing and buying new,
more productive and more efficient systems. Around the edges, I am
sure there were some consultants trying to sell people a lot of fancy
new things for no particular good reason. But I think that is a very
minor part of the process. The $100 billion in the United States was
spent by thousands of different organizations, each one making its own
judgments. The major Fortune 500 companies in the United States are
not naive. They are not run by people who are bamboozled by sales
people, either internally or externally. I think they ultimately are
people who spend their money carefully.

If you look at their information technology budgets, most of them went
up over the last two or three years. They went up not because somebody
was doing a good sales job. They went up because people were
discovering how difficult it was to solve this problem. The federal
government was the same way. We started with a Y2K budget under $3
billion and the number kept getting larger because it took more and
more time, people discovered, to actually fix the problem. And so the
indication of the magnitude of the problem is that in most cases
people found it took longer and it cost more and was more complicated
than they estimated. And these are people who are experts. They aren't
naive managers employing 25 people. These are large organizations with
their own in-house staff and very sophisticated managers who
discovered that, in fact, in many cases it took hundreds of millions
of dollars to solve the problem.

Q: Don't mainframe computer systems tend to get replaced anyway, due
to rapid advances in technology and speed?

A: Yes. I think for those people that was their judgment. In many
cases they did not realize how old and inefficient their legacy
systems were or how many they had; when they looked at it, they said
"Why don't we just get rid of all this stuff?" In fact, our view five
years ago in the federal government was that this would be a great
time to inventory our own systems and get rid of the ones that were
inefficient or complicated to run or always breaking down, and to
procure more modern, standardized off-the-shelf equipment. I think you
can find that in 20-25 percent of the cases in the federal government
that's what happened.

Q: Looking at developing nations, what was the extent of the problem
there, as it finally manifested itself?

A: It is always difficult to know what is going on in other nations.
What we do know is that when we invited the Y2K coordinators from
around the world to meet with us at the United Nations in December
1998, we had about 120 countries there, and probably
half of them weren't sure exactly what this problem meant. But they
all agreed to work together and share information on a regional basis
and on all the continents around the world. When we had them back to
the U.N. in June 1999, we had 173 countries represented -- the largest
meeting in the history of the United Nations. And it was clear that all
173 of those delegates knew that this was a problem of some degree in
their country that they needed to deal with. Our advice to them, as to
smaller businesses in the United States, was not that they go buy
everything new. We advised them that some things would be just fine,
but that they should take advantage of the information available,
assess each situation, find out what's actually at risk, and deal with it.

Increasingly, it became clear that most developing nations didn't have
much digital information technology: their power systems, their
telephone systems, a lot of their systems were analog. They were
automated, but their analog devices had gauges instead of digital
readouts and, therefore, they didn't really have any major risks. Our
concerns, and I think theirs, were primarily wherever they had gone
into the digital area, particularly in financial transactions. You can take
your credit card around the world and get cash almost everywhere these
days. All of that depends upon financial and telecommunications
systems that are interconnected between nations and continents. These
were what were most at risk, it turned out. But what was going on at
the same time was the central bankers of the world, out of Basel, were
working with all central banks in the world and all market regulators
to share information and to try to make sure there wouldn't be serious
problems come January 1, 2000, with the international flow of
financial transactions.

I think because of the kind of international effort and the fact
individual nations paid attention to the issue where they needed to,
we've only seen a few glitches -- some, but just a handful of glitches
in financial systems or similar telecommunications networks.

Q: Suppose no attention had been paid to the problem and no efforts
made to fix the Y2K bug in advance of January 1. What would have happened?

A: It was clear to me two years ago, after talking with a lot of
experts, that if nobody did anything beyond what they had already
done at that point, the world as we knew it would have ended. The
New York Stock Exchange would not have been able to open on Jan 3, the
financial markets would have closed, the banks would have had very
great difficulty calculating accurately the money they were owed, or
the money they owed to others. Payroll systems and other basic
complicated financial systems in the U.S. would not have functioned.
And over time we would have had a clear degradation in
telecommunications and some power systems. I think that we wouldn't
have had to wait very long, if we had done nothing. As systems started
to operate, they would have stopped. In fact, in spite of our largely
successful remediation efforts, I have seen a list of about 90
glitches and failures around the world due to Y2K problems. This list
is an indication of where we were headed if we had done nothing.

My disagreement with the doomsayers was with their view that we could
never fix it. Some believed that the problem was so complicated, and
potentially infected everything, that we'd never get enough
cooperation, enough work done together, enough information sharing, to
be able to get it done in time.

My view was that if we mobilized all possible resources, we could, in
fact, make a significant impact on minimizing the risks. If you talk
to major financial institutions in this country, major banks, major
telephone companies, they will all tell you that they are delighted
and breathing a great sigh of relief that their systems are running
today. They are confident that they wouldn't have run if they hadn't
done all this work in advance. In the State of California, Los Angeles
County -- an enormous jurisdiction -- estimated that about 60 percent of
its intelligent systems would have stopped. They looked at,
literally, thousands of systems -- they went through them all -- and
the vast majority of them had problems that if they hadn't corrected
them would have stopped them cold -- they would not have been able to
pay benefits to local people, they would not have been able to pay
their payroll.

So the irony is that because people worked at it in such a consistent
way, and there was effective information-sharing, and because people
got better at it as we went through it, people are now questioning
whether it was a big problem in the first place. Historically, in
information technology the world hasn't done well with big problems.
Major projects usually cost too much. They take a long time to get
done, and they usually don't work well, which is why a lot of the
doomsayers were information-technology programmers. They weren't
people off the street -- they were people who looked like they should
know. Some of them said it would be impossible. So one of the great
ironies is, the world having pulled together to meet this challenge
and deal with a major information technology problem, having done it
not a hundred percent perfectly, but pretty well, close to
ninety-eight percent perfectly, we now confront the other side of the
coin -- "Could you have spent less"? Oh, that's a good question to
pursue, but when you're running one of those companies, if you had a
major failure in the first week of January, in the year 2000, neither
"I didn't quite get it done" nor "Look how much money I saved by not
fixing it right" would have been an acceptable answer.

Q: Does the Y2K experience hold any long-term implications for the
global information infrastructure?

A: There are a number of possible implications. Many organizations
worldwide now have a better inventory of their information technology,
and a better understanding about the critical nature of it. In the
future, they'll manage these systems better.

In addition, I think focusing on the Y2K risk will help us with
understanding issues of information security as we go forward.
Information security has not received the attention it deserves, just
as information technology itself in some places has been seen by top
managers as peripheral to the function of an organization: "Well those
are the geeks, those are the techie guys, I don't know what they're
talking about."

I think what happened with Y2K is chief executives, national leaders,
top managers, discovered that you don't need to know about "bits" and
"bytes," the technical language of information technology, to
understand that if it doesn't work you are out of business. People
running organizations understand that the operations of information
technology and the security of information technology go to the core
of their ability to run their systems and run their businesses. So I
think that that will help us as we go forward, insuring that, in fact,
we provide the appropriate protections for those systems in the future.

And as we've said, I think most people will have better systems when
they get done with it. They will have upgraded; they will have
replaced their legacy systems. Finally, in terms of national and
international cooperation, it's not quite clear where it goes into the
future. Within the United States, you've seen a tremendous amount of
information-sharing and cooperation within industry groups and across
industry groups trying to deal with this problem. In addition, there
are better lines of communication between the private sector and the
government sector in a lot of countries. Then we had this kind of
unique cooperation on an international organizational basis with
national coordinators representing individual nations, and so we have
a list now of 173 national coordinators that we've been sharing
information with back and forth, who have been holding regional meetings.

There have been at least two regional meetings on every continent of
the world in the last year, sharing information, working together.
What you're most likely to see in the future is that, on a regional
basis, countries that have worked together on information technology
for Y2K are likely to continue to do that. South America is now
talking about how they can continue this kind of informal
information-sharing, to do a better job with electric power, and oil
and gas development now that they see how it all relates for the first
time throughout the continent. We've had some discussion with the
national coordinators at their request. Is there a way to continue
this informal, non-bureaucratic approach to sharing information? It's
not quite clear where that'll go. There are a lot of different
initiatives for improving the use of information technology in the
world and nobody wants to duplicate those efforts. But on the other
hand, one of the unique things about Y2K was it was dealt with
generally very effectively by ad hoc coalitions.

The International Y2K Cooperation Center was funded by the World Bank
with contributions from the United States. It had an affiliation with
the U.N., but it was really a freestanding organization. And the Joint
Year 2000 Council, which functioned under the Bank for International
Settlements, with market regulators and insurance regulators as well
as bank regulators, was pulled together as an ad-hoc group. Over 200
major financial institutions in countries around the world cooperated
in a way they never had before.

They all had a goal, which was that we had to deal with Y2K. So there
was a common enemy that people could rally against. Now that we've dealt with
that, there's a common goal of everyone being more efficient in using
information technology and taking advantage of it. Whether we'll be
able to figure out how to capture that experience and that momentum
going forward into the future is still not clear. Groups won't do well
just meeting for the sake of meeting. I think there is, at a minimum,
great interest in developed as well as developing countries in finding
a way to continue to share information about what's going on with
electronic commerce, what's going on with information security, but
it's still open as to what will come of this.

(end transcript)

(Distributed by the Office of International Information Programs, U.S.
Department of State)