Improvements in Programming Languages

Started by Charles Pegge, May 12, 2007, 10:38:30 PM


Donald Darden

Optimization is an elusive goal, because while you can refine code to implement a specific algorithm more efficiently, a different algorithm might serve better overall.  For instance, which would normally be more effective: converting all characters to upper case, converting them all to lower case, or dealing with each character in the case that you find it?  If you assume that you are working with standard text, then you might also assume that the vast majority of your characters are already in lower case, so less time is required to change all characters to lower case.  But if you were examining a body of code, you might find that all your keywords are already in upper case, so in searching for keywords, it might be best to treat each letter in the case that you find it.
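The trade-off described above can be sketched in a few lines of Python. This is only an illustration; the function names and the sample string are my own, not from the post:

```python
# Two case-handling strategies for a case-insensitive keyword search.

def find_lowered(text: str, word: str) -> int:
    """Normalize the whole text to lower case once, then search.

    Simple, but it copies and rewrites the entire text even when most
    of it is already in the expected case.
    """
    return text.lower().find(word.lower())

def find_as_is(text: str, word: str) -> int:
    """Compare character by character, folding case only where needed.

    Avoids rewriting the whole text, which can win when the input is
    already in the case you expect (e.g. upper-case keywords in code).
    """
    n, m = len(text), len(word)
    folded = word.lower()
    for i in range(n - m + 1):
        if all(text[i + j].lower() == folded[j] for j in range(m)):
            return i
    return -1

source = 'PRINT "hello" : GOTO Start'
assert find_lowered(source, "goto") == find_as_is(source, "goto") == 16
```

Which strategy is faster depends entirely on the statistics of the input, which is exactly the point: the machine cannot choose between them without knowing what the text is likely to contain.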

Optimization by machine can only compare different approaches to doing the same thing, and determine which method is most efficient.  But a problem immediately arises, because the machine has to discern what you are attempting to accomplish, and this is really beyond its powers.  A compiler writer who prides himself on optimized code may anticipate certain things you are likely to try, then design the optimizer to substitute a less evident but better method for the obvious one you adopted.  But that would be one programmer attempting to enhance the work of another, not something that the computer would undertake on its own.

While 64-bit programming may introduce additional address space and more instructions, the downside is that the OS will still just be giving you a sandbox to play in, forcing you to learn how to play nicely with other running programs and processes.  You will find that you still have to save registers in order to free them up for your own use, and to restore them afterwards.  It's like learning to drive on a two-lane road, then suddenly having to travel through the heart of a city on a roadway twelve lanes across.  You don't have the freedom of having all those lanes to yourself; you have to allow for other vehicles whizzing around you as well.

What you might hope for is an expanded instruction set that shows some insight by the designers into what really needs to be done in software, and better tools for that purpose.  But then there is always the question of legacy support for the older architectures, and whether you want your code to work on existing 32-bit and 16-bit machines or not.  You may be forced to forego the use of really advanced features, or you may not even be able to access them because your compiler/assembler may not include that support, or you may not have any supporting documentation to describe them or how to access and use them.

It's generally understood that software development lags hardware design by at least five years.  The hardware guys make it and give it to the software guys, and then the software guys struggle to figure out what it is good for, and how to get the most out of the design. 

We've all seen or read claims that DirectX 10 will change game development, and then maybe a year later, some titles that use DirectX 10 will begin to show up on store shelves.  At the same time, few video cards are able to support DirectX 10 yet either.  So how does this fit in?  Well, even in the hot and heavy game development market, it takes time to take advantage of new technology.  The pressure to bring new games to market is immense, with major bucks involved, so this is just a super-paced example that proves the point.

But hardware is not the only thing that evolves, and software does not limit its rate and growth of development to changes in the hardware.  New languages and tools are always appearing, new books appear to explain them to us, and new skillsets are expected of us, sometimes almost overnight.  I recall one job posting that wanted five or more years experience in a new language that had only become known commercially the year before.

The fact is, if you took all the possible languages, libraries, tools, and everything else now available to the programming community and stirred them all together in their many thousands, then cut a narrow slice to represent your ability to know and have experience with some of them, then what are the chances that your narrow sliver will exactly coincide with another sliver that represents the job skills and experience being requested by a job posting somewhere?

This is sometimes the advantage of the independent developer.  He (or she) can only bring to the job the things he has experience with, so the job, whatever it is, will be defined in those terms.  If you end up having to be replaced on the job, the likelihood is that the search will be for someone with your same qualifications.  Again, the improbability is that another person exists with exactly the same background that you have.

These are just observations that I've made.  I've also noted that we often do not choose the tool best suited to the job, but the one best known to the programmer or to the person identifying the requirements for the job.  We realize that the time and effort to retrain and get up to speed is prohibitively costly, and needs to be avoided wherever possible.  So demands for specific skill sets in the right combination with each other will continue.  And some combinations will be of greater value, and in greater demand, than others.  It can also happen that the more identified you are with a certain type of job or position, the less well suited you may seem for other jobs or positions.

Kent Sarikaya

Chances are anyone following this thread would be interested in this wild topic about functional languages and their power.

Even a dummy like me, could almost understand it all. Well I could follow along while he presented the topic, but in no way could I work my way through it on my own :)

I still don't see how it simplifies complexity; it just seems you could do similar things in any programming method, which he shows via C#.

Charles Pegge

Thanks Kent, I managed to get about half way through this video, before my brain went fuzzy. He is trying to convey some important principles in functional programming but the puzzle is how to apply them in a real project.

So I've been looking for material that relates functional programming ideas to the world of computer graphics and other complex areas of application. This is the best I've seen so far.

Tangible Functional Programming

'We present a user-friendly approach to unifying program creation and execution, based on a notion of "tangible values" (TVs), which are visual and interactive ...'

Kent Sarikaya

Amazing follow-up video to the previous one. I guess they are a dynamic duo of videos, opening up a whole new world of thinking and working.
They definitely go hand in hand really well, I think.

Charles Pegge

Parametric polymorphism

This will give you a flavour of how functional programming theorists think. The speaker deals engagingly with a very abstract subject, and elucidates the mathematical and philosophical origins of type theory and polymorphism, starting with Gottlob Frege over a century ago and continuing through to the present day, with languages like ML, OCaml, and Haskell.
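To give a concrete flavour of what parametric polymorphism means in code, here is a small sketch using Python's `typing` module rather than ML or Haskell (my own choice of language and examples, not from the talk):

```python
from typing import Callable, List, TypeVar

T = TypeVar("T")

# A parametrically polymorphic function: it works uniformly for every
# element type T, and its type signature says nothing about what T is.
def head(xs: List[T]) -> T:
    return xs[0]

# A function of type List[T] -> List[T] is already heavily constrained:
# since it knows nothing about T, it can only rearrange, drop, or
# duplicate elements; it cannot invent new values of type T.
def rev(xs: List[T]) -> List[T]:
    return list(reversed(xs))

assert head([3, 1, 2]) == 3
assert rev(["a", "b"]) == ["b", "a"]
```

The second comment is the key idea: the more polymorphic the type, the fewer behaviours a function can possibly have, which is one route to the robustness mentioned below.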

In Principia Mathematica, attempting to put mathematics on a rigorous logical foundation, Russell and Whitehead took some 400 pages to prove that 1+1=2. But this was done without the assistance of a computer. :)

I am sure I have not grasped the full significance of this talk, but it's something to do with making program design more robust by eliminating ad hoc design decisions.

Kent Sarikaya

Thanks Charles, I had seen that video before, but it was way beyond what I could grasp. But now seeing the other 2 videos, I can understand the gist of it better, but it is still way beyond me. Right now I see it more as a programming flow diagram, looking at these videos, and very hard to put into actual usable code that I can follow :)

Charles Pegge

This stuff comes from a notion of pure mathematics. As I see it, maths is all about patterns, no more, no less, so I am resolved not to be intimidated or bamboozled by obscure language. It would help if these guys used more evocative words and some practical examples; then many more people would understand what they are talking about.

The sense I get from these talks is that functions should be simple, unbreakable and as compatible as possible so that they can be used in any combination as long as the parameters are of the correct type.

The functions in functional languages have the additional feature of being able to receive and return other functions as though they were variables, giving another dimension of flexibility.
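That extra dimension of flexibility can be shown in a few lines. A minimal sketch in Python (the names are mine, not from the talks): functions are received and returned exactly as though they were variables.

```python
# Higher-order functions: functions as values.

def compose(f, g):
    """Return a new function that applies g, then f."""
    return lambda x: f(g(x))

def twice(f):
    """Return a function that applies f two times."""
    return compose(f, f)

inc = lambda n: n + 1
add4 = twice(twice(inc))    # inc applied four times in total

assert add4(10) == 14
assert compose(str, len)("hello") == "5"
```

Note that `compose` and `twice` never look inside the functions they are given; as long as the parameter and return types line up, any functions can be combined, which is exactly the compatibility described above.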

Kent Sarikaya

Wow, your description really encapsulates in a clear fashion what was going on in those videos. What a great way to explain it!!
Once you master this, you will need to be the one to bring it to the masses, with nice easy examples to follow.

Good luck on the studies and adventures into this dizzying world!

Charles Pegge

We could try this functional approach in the 3D world. Here is a way of describing the surfaces of various shapes:

All these expressions equal 0 when any point (x,y,z) lies on the surface of the shape. They seem like brain teasers at first but are potentially very useful. Can you work out the expression for a corrugated surface?

Infinite horizontal plane:

y

Sphere of radius 1:

x^2 + y^2 + z^2 - 1

These require more than one null expression to describe them fully:

Paraboloid (bowl):

x^2 + z^2 - y

Infinite cylinder of radius 1:

x^2 + z^2 - 1
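A hypothetical sketch of this idea in code (my own Python rendering; the thread itself uses no particular language): each shape is just a function of (x, y, z) that returns 0 when the point lies on the surface, so shapes can be passed around and combined like any other functions.

```python
# Implicit surfaces: each shape maps a point to a number,
# and the surface is the set of points where that number is 0.

def sphere(x, y, z):
    return x*x + y*y + z*z - 1      # sphere of radius 1 at the origin

def cylinder(x, y, z):
    return x*x + z*z - 1            # infinite vertical cylinder, radius 1

def paraboloid(x, y, z):
    return x*x + z*z - y            # bowl opening upward along y

def on_surface(shape, x, y, z, eps=1e-9):
    """True when (x, y, z) lies (numerically) on the shape's surface."""
    return abs(shape(x, y, z)) < eps

assert on_surface(sphere, 1, 0, 0)
assert on_surface(cylinder, 0, 5, 1)       # y is unconstrained
assert on_surface(paraboloid, 2, 4, 0)     # 4 + 0 - 4 = 0
```

The sign of the returned value also tells you which side of the surface a point is on, which is what makes these expressions useful for ray tracing and constructive solid geometry.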

Donald Darden

I think it is safe to say that language is the tool that gives us the mental lift to stand tall enough to perceive new ground for our thoughts.  We postulate what we can see as a new approach, term it, then teach the concept and the terms until we are familiar with both, and look for situations where this new approach seems to bear new fruit.  If we find any, we consider this validation of the whole process.

During a certain period of art, figures in paintings were sized according to their perceived importance.  When perspective was introduced, in a painting that appeared to be a mirror reflection of a town square, it took the art world by storm.

With the advent of digital circuits, we had to begin thinking in terms of binary arithmetic and logic, of absolutes rather than vague quantities such as "few" or "many", or "most probably" or "least likely".  Yet at the same time, we progressed from "things" to the concept of "information" about things as being equally valid.  Do originals really have some intrinsic value of their own that makes them worth what some people will pay for them, or is there some illusion involved that helps distinguish the original from mere duplicates?

Some computer languages strive to go ever higher in form, separating themselves from the mundane details of actual computer hardware and how it works, while other languages strive to bring us closer to how computer circuits actually work, so that we can achieve greater perfection in planned performance.  Many languages strive to be more "natural", by which we mean they reflect the way we've learned to talk and think and conceptualize things, and the hope of some is that language will eventually lead us to the point where we cannot distinguish man from machine.

These are interesting ideas and notions, and there is no doubt that we are making inroads on many of the problems of creating machines that we can communicate with, and with the way we communicate with them, and they with us.  But that does not signify that we can really elevate machines to the same plane that we have arrived at, partly because we do not really know how we got here, and also because being human and subject to mankind's limitations are not reasonable design goals for creating new machines.  It is too easy to acquire humans directly if that is what you ultimately want.