Improvements in Programming Languages

Started by Charles Pegge, May 12, 2007, 10:38:30 PM



Marco Pontello

I find this article fitting:

The Curious Mind - Levine the Genius Language Designer

Quote[...]
Would it be inappropriate to concoct a version of this story called "Levine the Genius Language Designer"? The first problem in discussing language design is that we do not know the answer to that question. We do not know whether the language designers are geniuses, or we ordinary programmers are cripples. Generally speaking, we only know how bad our present programming language is when we finally overcome the psychological barriers and learn a new one. Our standards, in other words, are shifting ones -- a fact that has to be taken into full consideration in programming language design.

Bye!

Donald Darden

Hey Charles, Eros!

I just caught up with your exchange.  Great stuff.  If you want to post any code
or links, please use my download section that was just added.  Same offer for
anyone else interested in topics related to programming, development, and such.

Programming represents an extension of the algorithmic concept to the dimension of the digital machine.  We approach it with words, symbols, logic and mathematics primarily, but people are looking for other means of expression, such as audio and visual.  I believe the goal for some is to make the machines more like us, or to make it easier for us to converse with machines.

Not that I think that this is entirely doable or even potentially useful.  Humans
are often at odds with each other over the meaning of things, and how things
should be interpreted.  I don't feel that I need to argue with my toaster in the
morning over how brown my toast should be, or which side it should be buttered on.

One thing presently lacking from most compiled languages, but which I have found in some interpreted languages, is the ability to define and set variables, even to
enter formulas and mathematical expressions, as part of the user interface.

For instance, if the program is currently running, and I enter a=1, then the
program would recognize "a" as a new variable, and set it equal to 1.  If I then
enter a=a+1, it would be equal to 2.  If I type ? a, then it would print 2.  I
could define functions, which would then be part of the language by extension
and immediately useful.
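The idea above can be sketched in a few lines of Python, where exec/eval give exactly this kind of live variable definition. The "? expr" print syntax and the single shared namespace are assumptions for illustration, not a real product's interface:

```python
# Minimal sketch of an interpreter-style interface that lets a running
# program accept "a=1", "a=a+1" and "? a" style input.

def tiny_repl(lines):
    """Evaluate assignment/query lines in a shared namespace."""
    env = {}
    out = []
    for line in lines:
        line = line.strip()
        if line.startswith("?"):
            # "? expr" evaluates and reports the value of the expression
            out.append(eval(line[1:].strip(), {}, env))
        else:
            # "name = expr" defines or updates a variable on the fly
            exec(line, {}, env)
    return out

print(tiny_repl(["a=1", "a=a+1", "? a"]))  # -> [2]
```

A user-defined function entered the same way (say, `f = lambda x: x * 2`) would immediately become callable in later lines, which is the "part of the language by extension" effect described above.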

One of the problems we have is the limitations of the ASCII character set, which
has been mentioned.  It is hardly ideal for recognizing mathematical expressions
as commonly encountered in books on the subject.  These have to be transcribed
into a form that we can handle with the available ASCII symbols, and yet we
cannot support things like subscripts and superscripts, vertical relationships,
and Greek symbols.

The fact is that mathematical notation is limited because it has only primitive
ways of expressing certain operations, and is unable to express others.  For
instance, you cannot easily show a way to take the integer or fractional part
of a value, or to use the absolute value of something, or to handle the log of
a negative number, or to introduce logical decision points as part of a single
equation.  So equations and functions have to be continuous over a given range.

Programming offers many new ways to examine things, and to attempt to model
data, but the inability of programming languages to adapt to the existing notation used with mathematics makes it harder for mathematicians to make
the step towards using computers.  Several mathematical languages exist to
deal with this, but I've never worked with them, so I am unsure if they deal
with another issue, which is making it easy to introduce new symbols and
operators.  AFAIK, new symbols have to be defined as names: SIN(), COS(),
LOG(), ABS() are predefined functions, and PrintThis() might be a user-named
function.  I don't know if any support the creation of new symbols per se.
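A middle ground that some languages do support is repurposing an existing operator symbol rather than inventing a new one. As an illustrative sketch (the composition meaning of "@" here is entirely made up), Python's `__matmul__` hook lets a user class claim the `@` operator:

```python
# Sketch: most languages cannot add brand-new symbols, but some let you
# overload existing operators. Here "@" becomes function composition.

class Fn:
    """Wrap a function so that f @ g means the composition f(g(x))."""
    def __init__(self, f):
        self.f = f
    def __call__(self, x):
        return self.f(x)
    def __matmul__(self, other):
        return Fn(lambda x: self.f(other.f(x)))

double = Fn(lambda x: x * 2)
inc = Fn(lambda x: x + 1)

print((double @ inc)(3))  # -> 8, i.e. double(inc(3))
```

This falls short of true new symbols (the fixed precedence and the limited set of operator glyphs are inherited from the host language), which is exactly the limitation described above.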

When we write a program, we generally define many new processes and use
existing built-in functions, possibly some library of functions, and likely define
some functions of our own.  The problem then is, that in essence, we are all
involved in an individual, personalized effort to extend the language into a
new area of expression or performance.  So, as a consequence, we all depart
from the underlying language whenever we create new functions, and that
may carry us into areas where others are left behind.

Libraries and shared code give us ways to try and join forces as we progress
in new areas, but this is at best a fragmented effort.  There is a lack of real
understanding of how the new functions work or what they do, what their
constraints are, how to employ them, where to find them, or even if they
even exist yet.

Once you get to a level where a language is extensible, and begin to extend it,
you are faced with a problem of managing the extensibility of the language.
What should go into expanding it, how should it be done, how to identify what
has been added, how to identify what is available for it, and so on.

The syntax of a language has a great importance in determining how user-friendly it is, how well we adapt to it, and how well we can express ourselves when we use it.  Making improvements in this area is always
beneficial.  One operator I've always felt should be part of a language is
WHEN.  This would effectively be a callback function, I suppose.  It simply
says that WHEN something happens, THEN something else should happen as a
result.  WHEN does not mean that something will happen, but if it should happen,
then you have anticipated it and have prepared a suitable response.  In the
real time world, programming is all about anticipation and consequences.
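The WHEN idea can be sketched as a registry of (condition, action) pairs that a program polls, so "WHEN something happens THEN respond" reads as data rather than as a chain of IFs. All the names here (watchers, state keys) are illustrative, not from any real language:

```python
# Hedged sketch of WHEN: register anticipated events and their responses,
# then poll the registry. WHEN does not mean the event will happen, only
# that a response is prepared if it does.

watchers = []

def when(condition, action):
    """Register an anticipated event and its response."""
    watchers.append((condition, action))

state = {"key": None, "log": []}

when(lambda: state["key"] == "q", lambda: state["log"].append("quit requested"))
when(lambda: state["key"] == "s", lambda: state["log"].append("saved"))

def poll():
    """Test every registered WHEN; fire those whose condition holds."""
    for condition, action in watchers:
        if condition():
            action()

state["key"] = "s"   # simulate an external event (a keypress, say)
poll()
print(state["log"])  # -> ['saved']
```

The condition can be anything testable: an external event flag, a message from another process, or an ordinary in-program conditional, matching the range of triggers described above.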

Marco Pontello

Quote from: Donald Darden on May 21, 2007, 10:05:22 PM
Several mathematical languages exist to
deal with this, but I've never worked with them, so I am unsure if they deal
with another issue, which is making it easy to introduce new symbols and
operators.
I've never used it either, but Wolfram's Mathematica seems to be the finest tool of this type.
I have seen some pretty amazing stuff done using it. Some examples & discussions about its features can be found on the Wolfram blog.

Bye!

Donald Darden

#18
I guess this topic would not be complete without some reference to some of the
available sites and software out there.

"MuPAD ('Multi Processing Algebra Data tool') originated in 1990, in the purest of research: work at the University of Paderborn for handling bulky data from investigations of group theoretical structure in non-linear systems. Paderborn's MuPAD Group developed it into an open-source, cross-platform algebra system, until, in 1998, funding pressure led the Group into a separate commercial launch by the newly-created SciFace Software. I think this offshoot can be judged as having gone well. SciFace has a long-standing distribution partnership with MacKichan Software in the USA, and a further vote of confidence from MacKichan's recent decision to drop its joint use of Maple, and use MuPAD as the sole computer algebra engine for its scientific typesetting range.

"MuPAD 3.0 has been launched for Microsoft Windows 95 to XP, and offers a large symbolic and numerical command set in a standard notebook format, with typeset formula output and a 'virtual camera' graphics viewer, Vcam. For programmers, MuPAD Pro contains a source-code debugger for troubleshooting user procedures, and advanced users can add 'dynamic modules', compiled run-time C/C++ applications. The picture is similar for other platforms, except for the older versions: 2.5.2 for MacOS X and 2.5.3 for Linux."

http://www.scientific-computing.com/scwjulaug04review_mupad_maple.html

http://mathforum.org/library/results.html?ed_topics=&levels=research&resource_types=software&topics=diffeq

http://archives.math.utk.edu/software/.msdos.directory.html

Charles Pegge

#19
The WHEN statement

Donald talked about the need for a WHEN statement. That got me thinking.

As a solution to spatial and temporal sequences:

Consider the two electrical formulae:

e=i*r
p=i*e

to resolve all the values without knowing which is defined

{
  when i,r then e=i*r
  when e,r then i=e/r
  when e,i then r=e/i

  when e,i then p=i*e
  when p,i then e=p/i
  when p,e then i=p/e

  when not e,i,r,p then repeat // or until no further resolution
}

A smart interpreter will be able to rearrange the lines to optimise the execution time, according to what data is most frequently presented, and also when to give up trying!
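This resolution loop is easy to sketch in Python: keep applying whichever formula has its inputs known until no new value can be derived. Variable names follow the post (e = voltage, i = current, r = resistance, p = power); the table-of-rules representation is my own illustration:

```python
# Sketch of the WHEN-block as data: (inputs, output, formula) triples,
# applied repeatedly until nothing new can be resolved.

RULES = [
    (("i", "r"), "e", lambda v: v["i"] * v["r"]),
    (("e", "r"), "i", lambda v: v["e"] / v["r"]),
    (("e", "i"), "r", lambda v: v["e"] / v["i"]),
    (("e", "i"), "p", lambda v: v["i"] * v["e"]),
    (("p", "i"), "e", lambda v: v["p"] / v["i"]),
    (("p", "e"), "i", lambda v: v["p"] / v["e"]),
]

def resolve(known):
    """Repeat until no rule can add a value -- the 'give up' point."""
    vals = dict(known)
    progress = True
    while progress:
        progress = False
        for inputs, output, fn in RULES:
            if output not in vals and all(k in vals for k in inputs):
                vals[output] = fn(vals)
                progress = True
    return vals

print(resolve({"e": 12.0, "r": 4.0}))  # derives i = 3.0 and p = 36.0
```

A smarter version could reorder RULES by how often each pair of inputs is actually presented, which is exactly the optimisation suggested above.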


Support for Parallel Processing


A temporal example

foundations.build()
{
  when foundations.done then slab.build()
  when slab.done then walls.build()
  when walls.done then floors.build()
  when floors.done then roof.build()
  when floors.done, walls.done then windows.install(); walls.plaster()
  when walls.plaster.done then walls.paint()
  when not walls.paint.done, roof.done then repeat
}
when walls.paint.done, roof.done then house.done=1


Special functions are required that initiate threads and return immediately,
allowing the thread to continue operating in the background on the workspace
provided by the object:

process .build()
  if this.busy or this.done then return
  this.busy=1
  ....
  this.done=1; this.busy=0
end process
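A runnable sketch of that process pattern, using Python threads: build() starts the work in the background, returns immediately, and guards itself with the busy/done flags from the pseudocode. The Stage class and the sleep standing in for real work are illustrative assumptions:

```python
# Sketch of "process .build()": launch a background thread, return at
# once, and let busy/done flags on the object drive the WHEN polling.

import threading
import time

class Stage:
    def __init__(self, name):
        self.name = name
        self.busy = False
        self.done = False

    def build(self):
        """Start work in the background; re-entry while busy is a no-op."""
        if self.busy or self.done:
            return
        self.busy = True
        threading.Thread(target=self._work, daemon=True).start()

    def _work(self):
        time.sleep(0.01)       # stand-in for the real construction work
        self.busy = False
        self.done = True       # this.done=1 in the pseudocode

foundations = Stage("foundations")
foundations.build()
while not foundations.done:    # the polling loop from the post
    time.sleep(0.001)
print(foundations.name, "done")
```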


...


Donald Darden

You grasp some of the possibilities of WHEN very quickly Charles.

WHEN and IF perform similar operations, but I had the idea that running
through a stack of IF statements would not be the most efficient way to test for
every possibility; by linking WHEN to some external or predetermined factor, it might be possible to thread through the IF statements that are related, and ignore the intervening IF statements that do not involve the same precondition.

Certainly the idea of using WHEN [condition] THEN [what to do] should be more comprehensible to someone new to programming than CALLBACK FUNCTION would be.  But the condition could be either an external event, such as a sequence of keys on the keyboard, or a message from another process, or it could be an event within the program that is defined in the manner of a conditional test.

Anyway, it was just a thought, and my ideas of how it might work are just tentative in nature.  My thoughts keep going back to this, and it is my feeling that it offers a chance to have real world extensions added to the supporting language.

Charles Pegge


Well, polling flags in a loop is not the most efficient way of doing things, but it is certainly flexible and can cope with multiple dependencies and trigger multiple actions. With a callback system, things can get quite complicated. I can't see an obvious way of using callbacks that could be applied to all situations.

In any case, most CPU time would be spent servicing the individual processes/threads. It's like a WinMain message loop, and multiple flags can be aggregated into single ones at various stages.

But a smart system, aware of process times and the probable order of execution, will be able to arrange processes adaptively so they execute smoothly. In a multicore system (and the future of computing must surely be parallel processing), CPU-intensive tasks can be allocated more resources, while simple but slow peripherals are left further down the list.

The difference between an IF and a WHEN, as I see it, is that the system can
rearrange the WHEN statements within a block to gain the best advantage, though logically they are the same.



Donald Darden

Exactly right.  When you are dealing with IF clauses, you have to assume that the sequence of IF statements is somehow critical to the overall design of the program.  The order of IF statements must agree with the order specified within the source code.  WHEN statements could be considered independent and on a par with other WHEN statements, so rearranging the WHENs would not affect the general flow of the program processes.  The smart system could then attempt some way to optimize the testing or polling necessary to see if any WHEN condition is met.

Optimization could then be towards the most efficient method of testing to find any WHEN statements that might need to be satisfied.  It may also be possible to consider some WHEN conditionals to be of a lower priority or occurrence rate than others.  For instance, if you are looking for a specific keystroke, you can take into account the typical length of time between key presses on the keyboard.  A general scheme to note when the last key was pressed and released may precondition a WHEN statement not to be tested for a tenth of a second or more.  An efficiency scheme could then attempt to create multiple timing or polling loops that adapt over time to include or exclude certain tests based on an evaluation of their probability of occurring over a certain time lapse.  This optimization process could be transparent to the application programmer, but assures that the results will be optimized towards the greatest response rate possible within a given system's hardware, operating system, and running processes.
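The per-condition re-test interval described above can be sketched directly: each WHEN carries a minimum time between tests, so rarely-changing conditions (like the keyboard example) drop out of most polling passes. The interval values and class names here are made-up illustrations:

```python
# Illustrative sketch: each WHEN records when it may next be tested, so a
# polling pass skips conditions that are not yet due for re-testing.

class When:
    def __init__(self, condition, action, min_interval):
        self.condition = condition
        self.action = action
        self.min_interval = min_interval   # seconds between re-tests
        self.next_test = 0.0

def poll(watchers, now):
    """Test only the watchers whose re-test time has arrived."""
    tested = 0
    for w in watchers:
        if now >= w.next_test:
            tested += 1
            w.next_test = now + w.min_interval
            if w.condition():
                w.action()
    return tested

fired = []
fast = When(lambda: True, lambda: fired.append("tick"), 0.0)
slow = When(lambda: False, lambda: None, 0.1)   # e.g. a keyboard check

assert poll([fast, slow], now=0.0) == 2   # both due on the first pass
assert poll([fast, slow], now=0.01) == 1  # slow one skipped for 0.1 s
print(fired)  # -> ['tick', 'tick']
```

An adaptive version would adjust min_interval over time from observed occurrence rates, which is the transparent self-tuning suggested above.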

WHEN might even be used to wake up programs that are not currently executing in memory.  Most methods for waking up new programs seem to involve some scheduler program, and likely triggered by reaching a certain predetermined date and time.  WHEN might make it possible to preschedule program executions based on a time reference, or based on some other operational parameter.  WHEN might actually interface to existing schedulers, or have its own scheduler capability.  It might also be possible to universally examine all prescheduled WHEN programs and conditions through a separate interface related to the function of the scheduler.


Theo Gottwald

Reminds me of an old Pascal book from university.
The IF is called a "two-sided choice" (translated) and the professor said:
"IF you omit the ELSE and give only one alternative, your program may have a mistake."

Charles Pegge


When dealing with WHEN statements, I think we must gently but firmly say goodbye to ELSE. Once you start changing the order of execution, the logic has to be very simple and solid.

In any case, a lot of ELSEs in a program make the logic too complex to follow and unsafe to modify. I have used them a lot in BASIC and in the intricate business of parsing and interpreting code, ELSE clauses always cause trouble.

The CASE structure allows logic to be traced more easily, and altered without unforeseen consequences.

{
  IF .. THEN .. EXIT
  IF .. THEN .. EXIT
...other alternatives
}

another way of doing this is with a subroutine instead of a block

IF .. THEN .. END
IF .. THEN .. END
..
... alternatives
END
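The "first matching test wins, then leave" pattern above maps directly onto early returns in most languages. A small Python sketch (the classify() cases are made up for illustration):

```python
# Sketch of the IF .. EXIT block: each test either handles the case and
# leaves, or falls through to the next, so there is no ELSE nesting.

def classify(ch):
    """First matching test wins; the function return replaces EXIT."""
    if ch.isdigit():
        return "digit"
    if ch.isalpha():
        return "letter"
    if ch.isspace():
        return "space"
    return "other"       # the final alternative

print([classify(c) for c in "a1 !"])  # -> ['letter', 'digit', 'space', 'other']
```

Because each branch exits, the tests can be read (and reordered) independently, which is the traceability advantage claimed for CASE-style logic.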







Charles Pegge

#25
Assembler Using High Level language Syntax

The uncompromised specificity and efficiency of assembler combined with the easy syntax of a high level language. The best of both.

Example:

' conversion of string to upper case

'High Level Code
'--------------------------------------------
i=1; l=len(s); c=l
{
  if c LE 0 then exit
  a=asc(s,i)
  if (a GT 96) then if (a LT 123) then a-=32; mid(s,i)=chr(a)
  i++; c--; repeat
}


' High Level Assembler
'--------------------------------------------
' esi indexes the string
' ecx indicates how many character bytes to convert

eax=0; push esi; push ecx
{
  if ecx LE 0 then exit
  eax = byte [esi]
  if byte eax GT 96 then if byte eax LT 123 then eax -= byte 32; [esi]= byte eax
  esi++; ecx--; repeat
}
pop ecx; pop esi


'---------------------------------------
'Low Level Assembler
'---------------------------------------
!  mov eax,0
!  push esi
!  push ecx
repeats:
!  cmp ecx,0
!  jle exits
!  mov al,[esi]
!  cmp  al,96
!  jle  iterinc
!  cmp  al,123
!  jge  iterinc
!  sub al,32
!  mov [esi],al
iterinc:
!  inc esi
!  dec ecx
!  jmp repeats
exits:
!  pop ecx
!  pop esi
'----------------------------------
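For comparison, the same logic is easy to verify as runnable Python: bytes in the range 97..122 ('a'..'z') are folded to upper case by subtracting 32, everything else is left alone, mirroring the 96 < a < 123 test in the listings above:

```python
# Runnable equivalent of the upper-case conversion loop above.

def to_upper(s):
    out = []
    for ch in s:
        a = ord(ch)
        if 96 < a < 123:      # ASCII 'a'..'z'
            a -= 32           # fold to 'A'..'Z'
        out.append(chr(a))
    return "".join(out)

print(to_upper("Hello, world 123!"))  # -> HELLO, WORLD 123!
```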

Theo Gottwald

Modern compilers have already reached the ability to do quite good optimization on simple code.

The question is similar to the chessmaster question:
When will a compiler be able to produce faster code from an algorithm than a human ASM programmer?

You may say "Never", but take a look at chess history.

The computer has the advantage that it can calculate latencies, processing times, averages and memory wait cycles much more accurately than a human programmer.

In the near future it may be like chess:
a compiler that makes faster code than a human programmer is conceivable.

Looking at compiler technology in terms of optimizations, the Intel C compiler looks to me to be ahead of the competition, but even it is far from being a "compiler chessmaster".

Marco Pontello

Speaking of the limits of optimization, a thing that comes to mind is dynamic, runtime optimization. A whole new class of opportunities for optimization opens up when you consider the "live", running program. Certain VMs can do pretty remarkable things in this regard, in some situations.

Check this:
HP Dynamo - Transparent Dynamic Optimization
Quote: Dynamic optimization refers to the runtime optimization of a native program binary. This paper describes the design and implementation of Dynamo, a prototype dynamic optimizer that is capable of optimizing a native program binary at runtime... Contrary to intuition, we demonstrate that it is possible to use a piece of software to improve the performance of a native, statically optimized program binary, while it is executing. Dynamo not only speeds up real application programs, its performance improvement is often quite significant. For example, the performance of many +O2 optimized SPECint95 binaries running under Dynamo is comparable to the performance of their +O4 optimized version running without Dynamo.

Bye!

Theo Gottwald

And thinking in terms of "Playing Chess" even these "dynamic Runtime Optimizations" are just the beginnings of what is possible.

Actually, these optimizations are mostly for a specific architecture.
A next step could be to simulate the different available processor architectures and take the best code combinations.

But it's like playing: once you need to play stronger, you develop better strategies.
Simulating the opponent has long been normal in chess. But they do more.
They even give numeric weights to different combinations in the game and then search to a greater depth of changes.
The compilers have just started to learn "playing", which is indeed different from just "statically" replacing text combinations with ASM combinations, more or less out of predefined tables.

Charles Pegge

One optimisation that is quite hard for programmers to do, but much easier for the system, is working out parallel paths of execution. The x86 already has register-level parallel processing and branch prediction in conjunction with shadow register sets. SIMD extensions allow arrays of registers to be processed on a single instruction. The operating system can allocate threads to several processor cores. Google does its work over the internet with massively parallel computer networks.

When we start using the full 64-bit mode on the x86, the most useful feature will be the extra registers, so there is less need to push your pawns onto the stack for passing parameters to a function.

But the hardware guys seem to be well ahead of the software guys when it comes to generating computer power.