Sun is hosting an interview with Jaron Lanier.
Jaron Lanier is well known for his work on "virtual reality," a term he coined in the 1980s. Renowned as a composer, musician, and artist, he has taught at many university computer science departments around the country, including Yale, Dartmouth, Columbia, and Penn. He recently served as the lead scientist for the National Tele-Immersion Initiative, which is devoted, among other things, to using computers to enable people in different cities to experience the illusion that they are physically together.
Currently, he is working on something he calls phenotropic computing, in which the current model of software as "protocol adherence" is replaced by "pattern recognition" as a way of connecting components of software systems.
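A rough sketch of that contrast in Python (the field names and similarity threshold here are hypothetical, not anything Lanier specifies): a strict handler fails on any deviation from the expected protocol, while a pattern-recognition-style handler tolerates near misses:

```python
from difflib import get_close_matches

# Strict "protocol adherence": the message must match the expected
# schema exactly; any variation fails outright.
def strict_handler(message: dict) -> str:
    return message["customer_name"]  # KeyError on any deviation

# "Pattern recognition" style: match the field approximately
# instead of exactly, tolerating near-miss names.
def fuzzy_handler(message: dict) -> str:
    candidates = get_close_matches("customer_name", list(message),
                                   n=1, cutoff=0.6)
    if not candidates:
        raise ValueError("no field resembling 'customer_name'")
    return message[candidates[0]]

print(fuzzy_handler({"customerName": "Ada"}))  # prints "Ada"
```

This is only a caricature of the idea, of course, but it shows the shift: the fuzzy version degrades gracefully where the strict one breaks.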
His interview delves into why he thinks the way we develop software is all wrong. Why can other industries deal with complexity a lot better than ours? (e.g. people building oil refineries, or commercial aircraft).
He answers the questions:
- What's wrong with the way we create software today?
- Aren't bugs just a limitation of human minds?
- Maybe we need to go back and start all over again?
- How would you do that with pattern recognition software?
- What do you want to say directly to developers?
- What advice do you have for developers just starting out?
View the interview with Jaron Lanier
I like his main idea that components (my words) should communicate via patterns and not strict protocols. However, he needs to be careful about the metaphors.
For example: A small change in nature can make a big difference. A little tiny blood clot, for instance, can kill a person. A small genetic mutation can cause a birth defect. The details of nature are important. Luckily, we didn't have to think them up on our own.
Metaphors help us to understand the unfamiliar in terms of the familiar, but that doesn't mean they are perfect models.
Nature's evolutionary model works great. A small genetic mutation might be catastrophic for a single person, but not for the whole system. That sounds great, but do I want my software to work that way? Maybe. Suppose a coding error caused my bank software to transfer money every now and then to Jaron. That wouldn't be catastrophic to either me or Jaron, but I wouldn't really like it.
Anyway, I highly recommend the article, as it has a lot of great ideas. Just think twice as you read into the metaphors.
Just like the Mac... you throw away your disk in the trashcan and then the system ejects it. Trashcan is a good metaphor but this is pushing it.
As a sidebar, in OS X when you drag a disk over the trash can, it turns into an eject symbol :)
"Why can other industries deal with complexity a lot better than ours? (e.g. people building oil refineries, or commercial aircraft)."
That is a question/statement I have heard many times before, and I still think it is untrue. Building a software system is not comparable to building a factory or any other artefact for which the base specification has been fixed for decades, and only adapted in detail for the current undertaking.
Since software systems usually mirror the unique details of the day-to-day business of large companies (or the inner workings of a freshly invented product), building them is more comparable to inventing, say, a completely new carburetor technology, or some new refinery process. Look how many engineers work on that, and for how long. And I don't see a better degree of automation there either.
One could say that the software counterpart for the pure assembly-type scenario from other industries is found in the replication of software onto storage media. We have that process pretty well automated, I would say.
Anyway, let's wait and see what the guy comes up with.
Software becomes so complex because enterprise systems have to deal with not only the complexity of the software system itself, but also the complexity of whatever it is trying to model. Take an oil refinery, for instance: not only do you have to create (model) the oil refinery, you have to write complex software along with it.
I agree that at present computer science is more art than science. The fundamental point being there are no laws of physics/chemistry. Hence we just cannot tackle the problem like other engineering disciplines. Give me the laws and I will start to engineer.
To me it looks more like the way lawyers work. Lawyers study cases and precedent. No lawyer builds his arguments from ground zero. The way I see it, the goddam lawyers invented open source and hyperlinking long before we did.
If we need to master complexity, we need to work like the lawyers do. But then, we do not observe, do we? I mean, we sell databases, but how many software firms do we have that collect data about their own processes and then try to make sense of it, like lawyers/marketers etc.?
Most software development is neither art nor science. It is simply engineering--adapting tried and true principles to build a solution to the problem at hand. It is a noble pursuit.
I think some developers prefer to think of themselves as "artists" or "scientists" because the names have a better ring to them.
If you are designing a novel and attractive user interface, you can claim to be an artist. If you are pushing the frontiers of artificial intelligence or simulating nature in software, you can call yourself a scientist.
But most of the time, we are engineers. Be proud of that!
Software has three dimensions, which I think most people don't realize.
One, which is obvious, is its "code" nature. I would say that this falls under the "science" heading. It's fairly well understood (given that its basis is deep in maths) and researched. It has its theorems and works according to these. You can prove a lot of stuff here, and anything you code is fundamentally based on this.
The second is its "description" nature. That is, we use something to describe real-world forces/laws. The real world exists regardless of this, and our description is usually just an approximation. I think most people miss this one, bundling it with one of the others.
The third is the "building". This is fairly obvious as well. You take a real-world scenario, figure out what laws apply, search a catalog of solutions for these, and build the thing. Of course, I'm intentionally simplifying here.
The relation is (somewhat) as maths-physics-engineering. Maths = computer science, engineering = software engineering. We don't have anything at the moment as the physics equivalent. Some of the patterns come close, but in a lot of cases they actually fall into the engineering category.
I'm not sure we can even come up with something, because while physics is great at describing the physical world, it fails to describe, say, the application of law (more's the pity). Software can be (and is) used to describe a much wider range of human activities (not all, but still much more).
Many of these are fairly complicated, with a lot of free variables, and not very well understood at all (how many people actually understand their business process?). Some have their own sciences (or should I say "sciences") that try to explain them. And if you look around, where such a science actually provides a good and solid foundation (as physics does), software built on it tends to be much better, with fewer bugs, etc.
So blaming either CS or SE for the failure to deliver is shortsighted. Yes, both of these can be improved and should be, but it solves only part of the problem.
As can be seen from many other posts before this one, within the core of software development lie many forms of science and art, but above all there's that quality that makes programming such a beautiful deed.
In Spanish it's called "ingenio," which means, among other things: ingenuity, skillfulness, cleverness, inventiveness, ingeniousness, resourcefulness.
Either way you see software, I think no paradigm can truthfully describe its nature. We have pushed our skills and found solutions for many problems, evolving software into something that's ruled by laws we created and at the same time in a great state of chaos. Just look at how many technologies we have to accomplish the same thing, for example, web development or component technologies, though they come to the same end.
It's all about your imagination, and as someone else said before, we should be proud of it.
P.S. The word "engineer" is "ingeniero" in Spanish; guess where that came from.
While I agree with Jaron's drift, I think that at this time we'd need to address more pressing issues related to typical software products. Interestingly, these urgent issues usually do not relate to large (i.e. over 10 million lines of code) software products.
In a way it's like worrying about the effects of global warming at the same time as worrying about the effects of an imminent possibility of a global nuclear war. In the light of these two possibilities, we must choose the one which carries a higher sense of urgency. Using common-sense reasoning, it's easy to reach the conclusion that if we end up with a global nuclear war on our hands, the global warming problem will immediately cease to be of any relevance. Therefore, it would be easy to see the real priorities, given the proper perspective.
In a similar way, the fact that we cannot foresee any significant breakthroughs in the arena of software coding has very little to do with the fact that the majority of software products in use today are next to useless. And I'm talking even about very simple products.
So, before we move on to consider Jaron's pleas, I'd suggest we invest any effort necessary to bring the quality of software we're building today to the acceptable level. How we do it is completely irrelevant, all that matters is the end result.
Comparing development processes for aircraft and software is just plain dumb. The reason has everything to do with the environment in which the end result is going to operate. An aircraft, while complex, does not operate under the strict logic and rule constraints that pure software does.
If an aircraft was anything like a piece of software, it would crash the minute the air pressure or wind direction changed. Software has NO room for external changes.
So how should software be built? I think the only way forward is to work on dramatically reducing coupling between artifacts and to start thinking more in terms of contracts, orthogonality and service quality.
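As a sketch of what "thinking in contracts" might look like, here is a minimal Python example using structural typing; the gateway names are made up purely for illustration:

```python
from typing import Protocol

# The contract: callers depend only on this interface,
# never on a concrete implementation.
class PaymentGateway(Protocol):
    def charge(self, account: str, cents: int) -> bool: ...

class TestGateway:
    # Satisfies the contract structurally; no inheritance required.
    def charge(self, account: str, cents: int) -> bool:
        return cents > 0

def checkout(gateway: PaymentGateway, account: str, cents: int) -> bool:
    # Orthogonal to the gateway's internals: any conforming
    # implementation can be swapped in without touching this code.
    return gateway.charge(account, cents)

print(checkout(TestGateway(), "acct-42", 1999))  # prints True
```

The coupling between `checkout` and any particular gateway is reduced to the contract itself, which is the point.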
The worst metaphor of all, however, is comparing software construction with building houses and the like. Most software needs to change frequently, due to new requirements, new technology, and new policies. Changing a house is not easy, now is it?
I have always thought of software as fast-forward evolution, so designing software systems with that in mind is very helpful. Software has an almost organic quality to it, which is one of the reasons it is so difficult to deal with. In this reality, you need loosely coupled, well-specified systems that are amenable to change.
Patterns play a central role in the creation and evolution of software systems, and they build on the basic concepts you mentioned. Their proper use can empower and enlighten us when building and federating software systems. The problem is that patterns in software (and just about every other discipline) are a fairly new concept in practice. The notion of pattern languages and composite pattern languages to base such work on needs quite a bit of research and development.
The gaining popularity of patterns in software is certainly an encouraging trend. Unfortunately, tools that help us harness them and visualize their cause and effect are pretty much non-existent. Given the complexity of software systems, the automation that tools can provide is necessary.
There is a great deal of work to be done in developing languages of patterns, their correct application, and visualization. At this early stage, we can begin to reap some of the benefits of patterns. In the future, they will fundamentally affect the way we create and manage software systems.
Gas turbine engines and space shuttles don't have to take a back seat to software in terms of complexity. I think the biggest advantage these developers have over their s'ware counterparts is the laws of physics.
The reason that builders of airplanes and buildings can get away, for the most part, with a waterfall development cycle is that designers can predict the behavior of their designs using simulations based on the laws of physics.
Writing an analogous simulation of a s'ware system is no less complex than developing the real thing.
Certainly things can still go wrong (e.g., the space shuttle explosion, the Tacoma Narrows bridge collapse), but the methods of predicting the behavior of physical systems are far better.
Surely this is an excellent example of the difference: to design a gas turbine engine, you do not need to consider all the slightly different arrangements of molecules as they enter the engine; a general theory of gas flows is adequate. With software, one unique arrangement can stop the program working, so you do need an understanding of the possible arrangements.
Probably the most important difference between other industries and software is that it is always the first time for software development, even if it is a re-engineering of an old system to address new technology. Moreover, software is incomplete if it misses even trivial operational issues. It all depends on the developer's ability to read between the lines during the discussion phase and his foresight to draw a line between flexibility and practicality.
Information is of physical nature and scientists are making progress (e.g. Quantum Theory of Information) but still, they're only seeing the tip of the iceberg.
Funny enough, turbulent flow is still one chapter in physics which creates more questions than answers ;-)
Nevertheless, your observation is correct: a molecule flow is radically different from a program. As somebody pointed out in this thread, not all program configurations are equivalent. Any form of statistical investigation of a program's behavior will then fail. The "Law of Large Numbers" does not work in this case, and unfortunately this is the very foundation of all sciences today.
Hence, there is no scientific tool to help us investigate such a phenomenon. This was first noted by E. Schroedinger in 1944 in "What is Life" and nowadays by Roger Penrose in "The Large, the Small and the Human Mind".
From this perspective, our hands are tied indeed, and it is no wonder that programming, as a science, has made little progress since its beginnings.
We must not forget that programs live in an environment that is not fault tolerant (the computer). If Jaron proposes to rethink computers, then that is a mighty, almost science-fiction-like endeavour.
The observation that side effects in real life are small and inconsequential, as opposed to bugs in programs, is true because the environment is forgiving of errors. Computers are not.
This, I think, is the root of the software engineering problem.
Interesting read though.
Hi all !
I strongly disagree with the opinion expressed by Jaron and many people here.
First, we are talking about software as an entity, which in my opinion only exists as an abstract one. The "real" objects that hold this abstraction are transistors, hard drives, etc... What exist are computers, so we should be comparing computers and planes, not software and planes. Or maybe comparing software with the minds (not the brains) of aircraft pilots? (smile...)
Microprocessors are fault-tolerant (yes, hundreds of transistors are inoperative, yet the microprocessor will still work as designed, within well defined limits such as clock frequency), hard drives are fault tolerant, operating systems are fault tolerant (a faulty program will not necessarily crash the whole operating system, at least with some of them...).
So why do we keep having problems with sending money across wires? Why do we still have the blue screen? And why do planes, strongly controlled by on-board, satellite, or ground computers, not crash that often?
Because we simply do not want them to crash, so we put maximum effort (which means a lot of skilled people, backed by strong theoretical knowledge, impressive application development tools, and extensive testing) into building those information systems, because we are talking about saving human lives.
On the other hand, whenever I try to send money via my bank's web site, after a couple of days I will check whether the money arrived or not. If not, I will simply call my bank account manager, describe the problem to him, and the problem will be solved in a couple of hours/days, or they might lose a customer. Why spend a huge number of man-years on "near to perfection" systems that will last only a few years before being replaced, when "technical support" will help provide the required quality of service? And the fact that those systems will only last a few years has nothing to do with computer SCIENCE (emphasis). The short life of some (by no means all) computer programs is much more linked to evolving business models and rules: in fact, these "business" programs are modelling human interaction (a field evolving at high speed), whereas, once the behaviour of a jet plane has been modelled, it is much less subject to change over time, unless the law of gravity itself were suddenly to mutate! (smile again...)
By the way, 5 years ago I wrote a distributed (Europe and America) billing system running on Windows NT 4.0 and Solaris; it has been running for 3 years with only a couple of reboots in this whole period. Not that bad for
Have a nice day,
I think that the main problem with the software developed right now is that the specification is ambiguous.
Most people have problems presenting a non-ambiguous specification and defining/voicing all of their assumptions and constraints (how often have you heard, "But that's obvious, isn't it?").
On most of the projects I have worked on, 80+ percent of the defects were the result of misunderstood requirements or unvoiced assumptions/constraints.
I see writing software as describing and defining (in non-ambiguous terms) some activity.
Building a refinery/car/airplane is (relatively) easy because the refinery revolves around a physical, non-human process that can now be described formally.
I would argue that we can write any large system we can formally specify without _any_ programming defects (though that is not the same as without any feature-misunderstanding defects). Writing software for running a refinery is (relatively) easy.
On the other hand, things like medicine, law, politics (or any sort of governance, like running a business) deal with ambiguities and very complex systems (a large number of variables with big variances), and we can't deal with these today. So how can we possibly write software that describes and defines them?
Like others, I strongly disagree about the nature of software and trying to turn it into 'engineering'. But not in quite the same way.
As I see it:
1. Fundamentally, software _is_ similar to building a house or running an assembly line (etc.) in that there are a set of discrete pieces that are put together to make a whole
2. It _is_ true that in software you don't have to worry about the temperature of your 'if' statement or the structural stresses around your 'for' loop.
(1) and (2) lead most people to assume that we should aim to build software as efficiently as we build houses or run factories.
The first problem is that most people have never built a house. Building a house is actually a series of problem-solving, creative activities to fit the architect's drawing to the real-world problem. Any builder will tell you that if you simply follow the architect's drawings you'll fail. There is a process of constant adaptation and change during a build.
The second problem is that when we build houses we typically build exactly the same house thousands of times. This is _not_ typically true of software - every development is different from the last one. People who have self-built houses know all about cost and time overruns, the impossibility of planning it all up front, etc.
The third problem is that because of (2), most houses are fairly simple. They involve a few thousand significant parts. The building blocks are big and must be well understood if the house is not to fall down. By contrast, because of (2), software is complex. A typical application may involve a million parts, and many of these are 'moving', i.e. with more than one degree of freedom.
The fourth problem is one of time and speed of change. Houses haven't fundamentally changed for centuries. New materials and techniques have come in over many years, after extensive testing. In software, the very languages (i.e. bricks, mortar, wood) fundamentally change every 5 years. We've also been building houses for thousands of years, but software only for half a century or so.
The fifth problem is that people don't understand factories. The assembly line is the thing that puts the pieces together into the finished product. It is _not_ the process that develops the new product in the first place. The assembly line is analogous to pressing CDs with software on them. To develop a new product, a company uses R&D labs where (surprise, surprise) people creatively try things out, throw away a thousand prototypes and operate about as differently from an assembly line as you can get. This is what developing software is like.
So, in my view these analogies are basically right, but people completely misunderstand what that means. Trying to make software into an assembly line or a mass-produced housing development will only work if you are writing the same (or extremely similar) software time and time again. :-)
So many computer programmers have such a limited range of experience that they are unable to understand how to apply real-world analogies (i.e. building a house). I come from a construction background and agree with you. There is a great deal of valuable understanding that can be gained from such comparisons, if properly understood.
Jaron Lanier is no exception to the disconnect problem. The difference is, that he hasn't spent much time doing "real world" programming lately -- he mostly gives speeches and does performance art. So his analogies are doubly flawed.
I would disagree with you about software projects not being as repeatable as building houses. 99% of software applications have already been done dozens, or even thousands, of times. Maybe not by you, but by someone. The NIH syndrome, as well as the youth of the industry, is to blame. One business application is much like another, even though methodologies, programming languages, and hardware change. Such variation is roughly comparable to the difference between building stick-frame or brick houses.
Where software methodology is lacking is in creating the equivalent of cathedrals and skyscrapers, where greater craftsmanship on the one hand, and better engineering on the other, are what is missing.
I agree that NIH is a big problem; but then, until the advent of the Internet, say the last ten years, it was pretty much impossible to share what is out there.
I think there are a couple of things that do make a difference, though. As people say over and over, mankind has built houses for thousands of years, so the accumulated experience is huge. This is partially offset by the ability to share information nowadays, so I'd say we won't need thousands of years for better software :) but I don't think ten is enough either.
Also, forces you have to deal with when building house are (mostly) directly apparent - such as gravity. Until fairly recently (last 100 years or so), by seeing a house you could more or less figure out how it was built and build one yourself. Even today, anyone interested can build a reasonable house (and yes, I've built a house with my own hands - and to my knowledge it still stands).
With more complex things, say electronics for example, it's much harder to visualise/fully comprehend all the relevant forces. They are still only one class of forces, though: you don't need to deal with gravity, or any of Newtonian mechanics.
Software can deal with any class of forces, most of which tend to be fairly abstract.
Building software and building houses are fundamentally different, they have a different nature.
Physical sciences have a statistical nature: they require long sequences of repeatable experiments from which we statistically extract generic conclusions/laws. Even mathematics inherits this statistical nature, since mathematical objects are nothing but abstractions of the physical world (sometimes not that intuitive).
Nowadays, there are (at least) two fields of a different, non-statistical nature, breaking all the rules we know: genetics and programming. The main difference is that long sequences of repeatable experiments are not possible. One may argue this is not true, since in genetics we have mapped, for example, the human genome, and scientists are slowly figuring out what the role of each gene is (of course in a statistical way). That is indeed so.
However, there is a category of problems (and this is what I am referring to) which are completely unsolvable in both genetics and programming, due to the impossibility of building long sequences of repeatable experiments. I'll present the problem from the programming perspective (and leave genetics to the genetic scientists).
"Let a program be given as a byte sequence. The instruction set (programming language) and the execution environment (VM, processor, OS, etc.) are unknown (or incompletely specified). For a given input, what is the output of this program?"
In order to answer the question, we must completely/exactly specify the transformations the program will generate. If not, the uncertainties can be used to completely negate the answer to our question. For example, if one single instruction cannot be exactly understood and known, we can find an execution environment which completely negates the program's behavior (the same as, or very closely related to, the Turing halting problem and/or the Omega number of G. Chaitin).
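A toy illustration of the point, with two made-up "execution environments": the same byte sequence produces different outputs depending on which instruction set interprets it, so without specifying the environment the question has no determinate answer:

```python
# The same program, as a bare byte sequence.
program = bytes([0x01, 0x01, 0x01])

def run_vm_a(code: bytes, x: int) -> int:
    # Hypothetical instruction set A: opcode 0x01 means "add 1".
    for op in code:
        if op == 0x01:
            x += 1
    return x

def run_vm_b(code: bytes, x: int) -> int:
    # Hypothetical instruction set B: opcode 0x01 means "double".
    for op in code:
        if op == 0x01:
            x *= 2
    return x

print(run_vm_a(program, 1))  # prints 4
print(run_vm_b(program, 1))  # prints 8
```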
1) What we call Computer Science is not a science: it cannot be disproved by experiment, nor can it make any predictions.
2) Software Engineering is not engineering: it is not applied science as a predictable, repeatable process.
3) Programming is an art, and therefore it is non-computable, unpredictable, and in most cases expensive.
> 3) Programming is an art and therefore it is non-computable, unpredictable [...]
Agreed, and that's why I love it...
While I agree that programming is not a pure science, I strongly disagree that it is pure art either. Art is basically a form of communication, a way for someone to crystallise inner feelings into something concrete. Art-engineering is a continuum. Computer science (CS) applied to business requirements is much closer to science than it is to art (I'm not talking about user interfaces here). How would you use CS to express sadness or anger? I think I can do it with painting and music, but I'll find it difficult to express in Java (maybe through creative class, method, and variable names, but that is quite limited). If CS were an art, its public would be very limited (restricted to other programmers). The art of CS can only reside in the way one chooses to implement a certain functionality. In any case, the art of programming can be quite obscure. Basically, all I can really communicate in a sort of artistic form is algorithmic beauty, and design patterns to a certain extent. Is that enough to say that CS is an art and not a science?
On the other hand, science requires a great degree of repeatable behaviour: the same root causes should trigger the same effects. CS is a transcription, into a low-level language, of requirements described in a human language governed by certain rules and environments. In order to work, a program therefore should be closer to science than to art. But as you probably know, uncertainty is basically everywhere, to various degrees, and introduces a third concept commonly known as chaos. The more uncertainty, the bigger the chaos, the bigger the space for human interpretation, the bigger the space for creativity, and thereby for art. Uncertainty comes in at many different stages: a customer does not know exactly what she wants, a business analyst is not certain she understood the requirements, information is lost at human communication interfaces, political interference, and finally a lack of technical savvy (non-exhaustive...). Lack of time and budget adds further constraints that add to the complexity.
CS contains a set of techniques aimed at limiting chaos. The fact that chaos exists is not enough to discredit CS as a science. Thermodynamics, ballistics, aerodynamics, quantum physics, and cosmology are other sciences that include uncertainty and require chaos management (aka risk management) when applied concretely. Trinh Xuan Thuan believes that in chaos lie beauty and a certain kind of harmony. CS is no different; this is why, I agree with you, it can be so attractive.
To meet halfway, I'd say that CS is 90-10/80-20% science-art, IMO. And plane construction must be 95-5. It's probably just a question of budget and time in the end. The level of acceptable risk is just lower for planes than it is for CS, which is why creating a plane involves so many more resources: if you take into account the Pareto 80-20 principle, the 5-10% difference requires exponentially more resources.
Finally, a good computer scientist is one who can gather enough best practices to limit uncertainty (one of which is sometimes to refrain from writing 'artistic' code). But the zero-uncertainty level does not exist and probably never will.
Who on earth can really believe that planes are bug-free? They just crash less often than my Micro$ system, that's it... And they cost a LOT more (maybe that's because we spend a lot of money just making sure they don't crash too much...)
Give us that money to write those bug-free programs you're talking about... and the time to do it!!!
"If you think about it, if you make a small change to a program, it can result in an enormous change in what the program does. If nature worked that way, the universe would crash all the time. Certainly there wouldn't be any evolution or life."
Science has it that the evolution of life boils down to tiny random DNA mutations... or did I get things wrong? That's probably not the best example to convince people (me, at least). :)
Jaron's is probably another quest for perfection. There may be interesting results in the end, though the whole approach looks a tad idealistic (what about bugs that come from elements external to the software, such as solar flares or magnetic storms, but that still need to be addressed?)... Oh, but he's a hippy anyway! :)
Jaron makes a point about the fact that today programs are fault intolerant, and advocates a change to programs that are "statistical and soft and fuzzy and based on pattern recognition".
Won't this have the effect of making such new programs imprecise? While they may crash less and fail gracefully within a statistical margin of error, they will still be in error.
Would those engineers building complex physical real-world structures, who use computers as a tool, still use programs if they introduced further margin for error?
When using these new programs, will we have to run them a number of times to find the result that is statistically more right than the others? For example: 2x2 resulted in the answers 4 (10 times), 3.5 (1 time), 4.5 (2 times)?
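Something like that, presumably. A sketch of how such a consensus might be taken over repeated runs (the run results here are hypothetical):

```python
from collections import Counter

# Hypothetical results from 13 runs of a "statistical" program computing 2x2.
runs = [4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 3.5, 4.5, 4.5]

def consensus(results):
    # Return the most frequent answer and the fraction of runs that agree.
    value, count = Counter(results).most_common(1)[0]
    return value, count / len(results)

value, agreement = consensus(runs)
print(value, round(agreement, 2))  # prints 4 0.77
```

Even then, as the post above notes, a statistically dominant answer is not the same thing as a correct one.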
Interesting interview nevertheless. I liked the idea of revisiting the past in computing. It is true that many of the things we see (and take as gospel) on a computer today are a direct result of a past limitation of computing power or hardware.
I like the idea of looking back at the history of computer programming. From the first chapter, students are taught about the FILE, so it acts as a founding construct of programming. So we really do need to rethink the paradigm of software engineering.