After Software, What's Next?
Hardware reliability is largely a solved problem; software reliability is not. If we are able to produce a hardware solution to most control problems, who will be its champion?
By Charles Moeller on 27 January, 2012 - 6:30 pm

During six-plus decades of adherence to the Turing paradigm, the computer field has reaped the benefits of ever-faster, ever-denser, and ever-more-reliable hardware. Over the same span of decades, the creation and maintenance of software hasn't gotten any easier and remains problematic, especially in matters of implementation, integration, and system safety.

Computation, as a technique that was formulated to solve cryptography via instruction-dominated symbol-swapping, may not be the most appropriate means of monitoring and controlling real-world physical processes, yet that's what 98% of the billions of microprocessors and their derivatives fabricated each year are made to do.

The tangled threads of linear-sequential operation tend to inhibit each other and may cause faulty operation. After these decades of experience, hasn't a better way been developed? Even the "massively parallel" solutions are processors slaved to operate in lock-step, and each is a linear-sequential system at its core: shared-resource hardware arranged to manage data in spatial memory addresses via step-by-step software instructions.

We could do better with an alternate technology. But if we did, who would be its champion?

cmoel888@aol.com

By Curt Wuollet on 28 January, 2012 - 6:57 am

This is the wrong forum to ask about change. Most are strongly attached to doing, or at least modeling, control with relays :^) It seems automation today is about the easiest way to control things rather than the best.

And most decry the procedural programming language stepping-stone between mind and hardware. Assuming we don't veer off into massively parallel nanocomputers or chemical computing, I think the next step would be for our thinking machines to replicate a system from a description, producing an ASIC with the logic built in, or something on that order. Who will champion that? IP houses like ARM, or the silicon giants.

That's my guess

Regards
cww

By Charles Moeller on 28 January, 2012 - 12:04 pm

Thanks for your thoughts, CWW.

Relays are good for concrete conceptual understanding: ladder diagrams and logic are straightforward.

Existing computer languages, both functional and procedural, are restricted to linear-sequential thought and operation. Life is parallel-concurrent.

Give a computer the task of invention and you may get somewhat better results, but generally more of the same old stuff. The best thinking machine for generating new ideas is the human mind.

Our logic has us thinking in flat, two-dimensional channels. Time is translated to data in space and all processes are done through combinations and sequences of AND, NOT, and STORE.

What is needed: a parallel-concurrent language and logic that describes physical processes in such a way that the same description is the exact specification of the controller for each process.

CharlieM
cmoel888@aol.com

Right, the world is multi-threaded, with both parallel and sequential operations. That's why Sequential Function Chart programming was created. It is one of the IEC 61131-3 "languages" and is very much underused. It was created by Télémécanique and called Grafcet. Learn more about this powerful concept.

Dick Caro

Dick Caro:

Thanks for the tip on Grafcet. I'll look into it.

What I expect to see, however, is another one of the accepted "languages" that are characterized by, based upon, and require, a Turing-type mechanism (TM) to do the data-processing. That restriction limits any resulting parallel-concurrent operations to those performed in a linear-sequential manner.

No real change in the fundamentals.

What is needed is something really different, some new thought of how to go about controls and not constrain every control problem to those that can/must be performed by TMs.

cmoel888@aol.com

By Curt Wuollet on 29 January, 2012 - 1:37 pm

One has to consider, though, that we got here through parallel, though modest, means. Back to the future: a control system often had logic for each individual function. Relay logic was like that. And before microprocessors, digital control was like that. You had boards of gates that could combine signals where needed, but much was done asynchronously. It could be interesting to troubleshoot, to say the least. But if you look at it and squint a little, all a PLC did was replace the rack of cards with a box that implemented the same logic. That is to say, it was not a compute engine as such, just a cheaper, more reliable replacement.

So, to mimic your earlier questions :^) Do we care what's inside the box? Do we need a PLC with guts a zillion times faster or more esoterically correct? Actual computing on the devices is in its infancy, which is why I'm fond of using a "real" computer in the first place, given a non-trivial need. Your emphasis seems to agree with my approach that it's a lot easier to add PLC functions to a full-featured computer than to add the functions to a PLC. I think that's why there are now PACs.

I will play the devil's advocate and ask: What problem are you trying to solve? Illuminate please :^)

Regards
cww

CWW:

> Do we care what's inside the box?

I think so. I would much rather have a directly-connected, parallel-concurrent system monitoring and controlling my physical process in real time. The alternative, which is what we have now, is hardware signals handed off to a TM for data-processing and then handed back to hardware: slow, unsafe, and complex. When you start considering safety-, time-, and mission-critical applications, there is even more reason to select a parallel-concurrent model.

> I will play the devil's advocate and ask: What problem
> are you trying to solve? Illuminate please:^)

The problems I am looking to solve are those of:
cost (to design and build, and to own), safety, complexity, maintenance, and understandability.

Regards,
CharlieM

By William Sturm on 30 January, 2012 - 9:38 am

< Charlie said: "The problems I am looking to solve are those of:
cost (to design and build, and to own), safety, complexity, maintenance, and understandability" >

The number one problem, I think, is not technology; it is marketing. There have been many great ideas that never got off the ground.

Of course, the right idea at the right time can be a powerful thing!
 Bill Sturm

By Timothy P Niemczyk on 7 June, 2013 - 9:01 pm

Let me know if you solve the marketing problem. We have a non-programmable solid-state solution for monitoring inputs and outputs, but marketing seems to stop forward progress. It is difficult to get industry to look beyond the PLC market for a more secure solution.

> The number one problem I think is not technology, it is marketing.  There
> has been many great ideas that never get off of the ground.

By Charles Moeller on 8 June, 2013 - 11:59 am

At long last, having solved the technical end, I am now working on the marketing problem:

I stumbled on a method that originated in the Inventors Assistance League International, which suggests precise steps for conducting a marketing campaign aimed not at the ultimate user but at the manufacturer(s) or licensor(s). Such (large) target organizations would have the resources, once convinced of the item's or method's benefits, to carry the work forward.

The method leverages your ideas rather than having to do all the work yourself.

Can you describe the benefits of your "non-programmable SS solution"?

> Let me know if you solve the marketing problem. We have a non-programable solid state solution to monitoring of
> inputs and outputs but marketing seems to stop forward progress. It is difficult to get industry to look beyond
> the PLC market for a more secure solution.

>> The number one problem I think is not technology, it is marketing.  There
>> has been many great ideas that never get off of the ground.

Look at Grafcet or the Sequential Function Chart language of IEC 61131-3. It is a top-down graphical/charting specification where the objectives of each block are defined. Each block can then be written in its own SFC, or in any of the languages of 61131-3. This is a parallel process model, NOT a Turing machine model. Look first, then please share your impressions. It's NOT what you expect.

Dick Caro
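SFC itself is graphical, but its step/transition semantics can be sketched in ordinary code. Below is a minimal Python sketch of the idea, including a parallel branch where two steps are active at once; the step names and conditions are hypothetical, not taken from any 61131-3 tool.

```python
# Minimal sketch of SFC step/transition semantics (hypothetical example).
# A transition: when all of its source steps are active and its condition
# holds, deactivate the sources and activate the targets.
transitions = [
    ({"Init"},         lambda io: io["start"],   {"Heat", "Stir"}),  # diverge: parallel branch
    ({"Heat", "Stir"}, lambda io: io["temp_ok"], {"Drain"}),         # converge: synchronize
    ({"Drain"},        lambda io: io["empty"],   {"Init"}),
]

def scan(active, io):
    """One evaluation cycle: fire every enabled transition."""
    for sources, cond, targets in transitions:
        if sources <= active and cond(io):
            active = (active - sources) | targets
    return active

active = {"Init"}
active = scan(active, {"start": True, "temp_ok": False, "empty": False})
# "Heat" and "Stir" are now active concurrently
```

The point of the sketch is that concurrency lives in the chart's structure (a set of active steps), not in the instruction stream of whatever engine evaluates it.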

Correction to original message - http://www.control.com/thread/1327707041#1327864914


Alan Mathison Turing
Born 23 June 1912
Maida Vale, London, England,
United Kingdom

Alan Turing is considered the father of the modern stored program digital computer. The mechanism in his invention is called a Turing Machine.

Dick Caro


Dick Caro:

> Look at Grafcet
> Look first, then please share your impressions. It's NOT what you expect.

I looked and found this description of GRAFCET: "In 1988 it was adopted by the IEC as an international standard under the name of «Sequential Function Chart» with reference to the number «IEC 848». Translators have existed for many years, to implement GRAFCET on real time computers or programmable controllers."

Fuzzy logic, Petri nets, Neural nets, and Grafcets are all presently implemented on computers or clock-driven state-machines. This is one of my gripes: that advancements that should provide tremendous benefits are funneled through the constraints and impediments of TMs, thus picking up the restrictions and characteristics of the data-processed model.

Regards,
CharlieM

By Russ Kinner on 30 January, 2012 - 1:42 pm

I presented a paper at the 14th International Programmable Controllers Conference in Detroit in 1985. William Keller of Joucomatic Controls presented a paper in the same session, titled "GRAFCET, A Functional Chart of Sequential Processes". That was my first introduction to Grafcet, and I thought it was a great concept back then, but was disappointed that it really didn't catch on in the US.

I doubt the Engineering Society of Detroit has the paper on line but you might search around for it. I didn't see any of the IPC conference papers listed when I checked Google.

Unfortunately, many of the early papers in the PLC world are not available. We risk what has happened all too often: there is no monetary incentive to make papers available, and since they are all covered by copyright, no one else can legally make them available. The only copy I know about is my marked hardcopy, slowly growing old on my shelf.

Maybe someone on the list is a member of ESD and can see if the old conference proceedings are or can be made available. I'd be happy to contribute my copies to the cause if they could be put on line.

Russ

Great thread, Charlie!

Back in the 1990s, Dick Morley hosted a series of "Chaos Conferences" in Santa Fe which examined alternatives to traditional procedural or combinatorial control. One of the major underlying themes was to look for hidden order in seemingly stochastic processes (hence "Chaos") and gain insights from that order to develop better control strategies. The conferences followed the work of the Santa Fe Institute on so-called Chaos Theory, the study of processes which, although deterministic, were highly sensitive to differences in initial conditions -- weather is the typical example.

Along the way, we looked at alternative programming methodologies with which various companies were experimenting at the time, including:

- Neural networks - essentially trying to replicate the methods of the brain, these create outputs as a function of inputs, often through a "training" process instead of explicit programming.

- Fuzzy logic - acts on input variables based on degrees of truth (or, as we say these days, "truthiness"). So, a sensed temperature could be too cold, too hot, or fairly hot, kinda cold, etc. If this sounds a bit like a linguistic PID, well, maybe it is, but as a mode of expression it may be closer to how we think about things.

I'm not sure whether either of these approaches became widely used in industry. I know that Omron was touting fuzzy logic based controllers for a while, but haven't heard anything about these lately. Neural networks were beginning to be used in research and analysis settings, but their opaque nature made people nervous about using them in control (so, what EXACTLY did the neural network "learn"??).

I have my own ideas about how best to model real-world applications in the structure of a programming language, to reduce the distance between how we think about our actual processes and the arcane contortions we have to go through to program them. I've been working on an article on this very topic to kick off a blog I'm starting, and will provide a link on Control.com when it's launched.

In the meantime, thanks for an interesting discussion!

Ken Crater
Founder, Control.com
ken@control.com

Thanks for the encouragement, Ken.

I am enjoying this forum.

Regards,
CharlieM

Ken,

The fundamental problems of TM-dominated systems persist, and will do so for as long as we insist upon solving all our problems via the Turing paradigm: shared-resource hardware, software, and linear-sequential operation. I've intimately experienced four decades of "progress," including millions-fold advances in hardware capabilities. These meaningful improvements have produced the Moore's Law effects that have made computing available to everyman at reasonable cost. But a software crisis exists, ongoing before and since E.W. Dijkstra named it in 1971. We know that software can't cure the ills of software because it hasn't done so in the last 60-plus years, and not for lack of trying.

My thesis is that the problems of software can be cured by smarter hardware.

By William Sturm on 30 January, 2012 - 9:34 am

< Charles said: "My thesis is that the problems of software can be cured by smarter hardware" >

That is an interesting topic. It seems to me that typical machine/assembly code has not made any great advances in decades.

An interesting idea is silicon that has a high-level language as its native machine code. Some excellent examples are the various Forth processor chips out there. They have never become mainstream, but they appear to have some very nice capabilities. They are a great example of hardware that is specifically designed for software.

If you dig around in these sites, you will find a lot of interesting ideas:
http://www.ultratechnology.com/
http://www.colorforth.com/
http://www.offete.com

Along the same lines, there are also some native Java bytecode processors out there:
http://en.wikipedia.org/wiki/Java_processor

Another interesting hardware item that comes to mind is the Parallax Propeller chip. It has eight 32-bit cores, each with its own timers, and a global shared memory space. The idea is to eliminate interrupt service routines: you can assign a single core to service a high-speed event. It enables programmers to write their own peripheral functions in software. Except for timers, there is no typical peripheral hardware on the chip.

You can read all about it here:
http://www.parallax.com/

I hope that you find these links to be helpful...
Bill Sturm
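The Propeller's dedicate-a-core-per-event idea can be illustrated in miniature with ordinary threads. This is only a structural sketch: Python threads stand in for what are, on the Propeller, real hardware "cogs" running their own loops, and the names here are invented for illustration.

```python
# Instead of an interrupt service routine, dedicate one execution unit
# entirely to a single fast input (the Propeller "cog" idea, sketched
# with Python threads standing in for hardware cores).
import threading
import queue

results = queue.Queue()  # stands in for the shared hub memory

def edge_counter(edges):
    """Dedicated worker: does nothing but count the fast event."""
    n = 0
    for _ in edges:      # on real hardware: poll a pin, no interrupts needed
        n += 1
    results.put(n)

# The "main cog" launches the dedicated watcher and is free to do other work.
worker = threading.Thread(target=edge_counter, args=(iter(range(1000)),))
worker.start()
worker.join()
count = results.get()
```

The design choice being illustrated: latency-critical work gets an execution unit of its own, so the main program never has to be preempted.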

Bill:

>> Charles said: "My thesis is that the problems of software can be cured by smarter hardware"

> That is an interesting topic, it seems to me that the typical machine/assembly
> code has not made any great advances in decades.

Right! The whole idea of code is instructions to shuffle data from one location to another, while keeping track of from- and to-locations. Transforms, substitutions, and translations can be performed as well. The sources of data are the input sensors. Once the data are separated from the real-time process by sampling and storing, there is no other recourse but to data-process. My approach is to avoid this step as much as possible and treat (not data-process) sensor information in its native, space-time-immersed state and in real time, not after stale and sterile representative data is retrieved from memory in some subsequent step.

> An interesting idea is silicon that has a high level language as its native
> machine code. Some excellent examples are the various Forth processor chips
> out there. They have never become mainstream, but they appear to have some
> very nice capabilities. They are a great example of hardware that is specifically
> designed for software.

I have played with Forth. Interesting side-trip but still a data-processing method.

> If you dig around in these sites, you will find a lot of interesting ideas:

Thanks for the links.

The Parallax Processor approaches a processor per sensor. That's more like where I'm headed with "mostly hardware," without the processors.

>> Charlie said: "I see software as an unneeded and complicating factor. One
>> needs both software and hardware experts, rather than just hardware
>> ones."

> I see little difference fundamentally between hardware and software.
> Algorithms are algorithms, they can be designed into hardware or software, only
> the techniques are different. Like relay logic and a PLC, they can both
> accomplish the same task. Same with silicon vs. software.

Bill, you and I have totally different realities on hardware & software:

Hardware is essential. It performs all the logic and arithmetic operations, and the reception, decoding, and storage of sensory information and data. Every effect created by a hardware-software combination is initiated in and executed by hardware, not software. Hardware even houses (control store) and paces (instruction counter) the software instructions.

Hardware is indispensable. Controllers can't work without it. It is not the case that hardware is dependent upon software for functionality; it is the other way around. Software depends upon hardware to code it, house it, access it, step through it, and implement it. That turns out to be a benefit, because hardware can be tested with finite resources, while software testing may never end. "Complete testing of a moderately complex software module is infeasible. Defect-free software product can not be assured."[1] "Software essentially requires infinite testing, whereas hardware can usually be tested exhaustively."[2]

1. "Software Reliability" by Jiantao Pan, Carnegie Mellon University, 1999
http://www.ece.cmu.edu/~koopman/des_s99/sw_reliability/
2. Overview of Software Reliability
http://swassurance.gsfc.nasa.gov/disciplines/reliability/index.php

Best regards,
CharlieM
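The "finite resources" claim about hardware testing can be made concrete: a combinational circuit with n binary inputs has exactly 2**n cases, so every one of them can be checked. A sketch in Python, using a hypothetical 3-input voting circuit as the device under test:

```python
# Exhaustive testing is feasible for combinational hardware because the
# input space is finite: n binary inputs mean exactly 2**n cases.
from itertools import product

def majority(a, b, c):
    """A 3-input voting circuit: output is 1 when two or more inputs are 1."""
    return (a & b) | (b & c) | (a & c)

# Enumerate every possible input -- all 8 of them -- and check each one.
for a, b, c in product((0, 1), repeat=3):
    assert majority(a, b, c) == (1 if a + b + c >= 2 else 0)
```

A program with loops, state, and unbounded inputs has no such finite enumeration, which is the asymmetry the two quoted references describe.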

By Charles Moeller on 16 February, 2012 - 7:21 pm

Bill:

>> Charles said: "My thesis is that the problems of software can be cured by
>> smarter hardware"

Bill said:
> That is an interesting topic, it seems to me that the typical machine/assembly
> code has not made any great advances in decades.

The fundamental logic has not changed much either, nor have computer activities. All higher-level computer languages (i.e., in software) are ultimately decomposable to, hence built up from, sequences and combinations of the Boolean operations (AND, NOT, and combinations) and STORE. In machine language, those operations are used to determine explicitly: a) the locations from which to acquire the numerical or conditional operands, b) what Boolean operations to perform, c) where to put the results, and d) the next step in the program. Every step is, and must be, predetermined. At bottom, that is all a computer can do.

AND, NOT, and STORE. That's three words. Imagine writing a newspaper article (or describing a dynamic process) using repetitions, variations, and combinations of only three root words. So few allowable words or operators (simplistic logic) forces structural complexity and is one of the reasons that software is troublesome. We can simplify the code by having more sophisticated primitive operators in the foundation logic, especially in the time-domain.
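The three-word constraint can be made concrete. Everything below is composed from only AND and NOT (plus a fed-back output standing in for STORE); the richer operators are just taller stacks of the same primitives. Plain Python, truth values only, as an illustration rather than a hardware model:

```python
# Building richer operators from only AND, NOT, and STORE.
def AND(a, b): return a and b
def NOT(a):    return not a

def OR(a, b):   # De Morgan: a OR b == NOT(NOT(a) AND NOT(b))
    return NOT(AND(NOT(a), NOT(b)))

def XOR(a, b):  # "differs" == (a OR b) AND NOT(a AND b)
    return AND(OR(a, b), NOT(AND(a, b)))

def STORE(set_, reset, held):   # a set/reset memory element: output feeds back
    return AND(OR(set_, held), NOT(reset))
```

Each new "word" is several uses of the old ones, which is exactly the structural complexity the paragraph above describes.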

If there are but few basic words or operations in any language, then it takes lots of them to put together a process description, or story. Sentences may get very complex due to the repetition of those few words/operators in existence. My approach to a more robust and healthier logic system was to identify the words and operations to expand the logic experience, with emphasis on the relations of time. Additionally, I devised a temporal logic that is native to the time-domain, apologies to A. N. Prior. Conventional logics must translate temporal-domain signals into data suitable to the space-domain where they can be manipulated by static operators. PTQ does not need to translate from the time-domain to the space-domain for logic operations, then translate back to the time domain for useful output. PTQ can determine the truth value of temporally related events and conditions on-the-fly, as they occur in real time, with resultants available within a few gate-delays.
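PTQ's own operators are not specified in this thread, so no code here can represent it. As a generic sketch of what "determining the truth value of temporally related events on-the-fly" can look like, though, here is an online monitor for one temporal relation ("B must follow A within a deadline"), judged as each event arrives rather than by post-hoc query of stored data. All names and the relation itself are hypothetical illustrations, not PTQ:

```python
# Generic on-the-fly temporal evaluation (NOT PTQ): check, as events arrive,
# that event "B" follows event "A" within a deadline.
def make_monitor(deadline):
    state = {"t_a": None}
    def on_event(name, t):
        """Feed events as they occur; returns False on a violation."""
        if name == "A":
            state["t_a"] = t
            return True
        if name == "B":
            if state["t_a"] is None or t - state["t_a"] > deadline:
                return False        # B too late, or B without a preceding A
            state["t_a"] = None
            return True
        return True                 # unrelated events pass through
    return on_event

mon = make_monitor(deadline=5.0)
ok = mon("A", 0.0) and mon("B", 3.0)    # B arrived within 5 s of A
```

The contrast with the "sample, store, then data-process" pattern is that the verdict is available at the moment the second event occurs.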

> An interesting idea is silicon that has a high level language as its native
> machine code.

My process-control language specification of a process translates directly to hardware that monitors and controls the process.

Best regards,
CharlieM

By Bruce Durdle on 16 February, 2012 - 11:11 pm

I can't let this go unchallenged!

Charles said:
> The fundamental logic has not changed much either, nor have computer
> activities. All higher-level computer languages (i.e., in software) are
> ultimately decomposable to, hence built up from, sequences and combinations
> of the Boolean operations (AND, NOT, and combinations) and STORE.

So? That's not necessarily a limitation - it's a fact of Boolean life. Similarly, I could state that "All arithmetic operations are made up from additions and complements" - that's two basic operations. It doesn't stop me from using a defined combination of these to do multiplications, logarithms, or trigonometric functions - I don't have to explicitly use multiple additions to get these results.

Bruce.
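Bruce's two primitives, addition and complement, really are enough to build up the rest of integer arithmetic, just as he says. A sketch in Python, with 8-bit wraparound assumed for concreteness:

```python
# Subtraction and multiplication built from only addition and complement,
# with 8-bit wraparound assumed.
MASK = 0xFF

def complement(x):
    return x ^ MASK                  # one's complement of an 8-bit value

def subtract(a, b):
    # a - b == a + (one's complement of b) + 1: two's-complement addition
    return (a + complement(b) + 1) & MASK

def multiply(a, b):
    # repeated addition; a real ALU shift-and-adds, but it's the same principle
    total = 0
    for _ in range(b):
        total = (total + a) & MASK
    return total
```

As Bruce notes, nobody explicitly writes the repeated additions; the defined combination is packaged once and reused, which is his answer to the "only three words" objection.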

Bruce:
>Charles said:
>> The fundamental logic has not changed much either, nor have computer
>> activities. All higher-level computer languages (i.e., in software) are
>> ultimately decomposable to, hence built up from, sequences and
>combinations
>> of the Boolean operations (AND, NOT, and combinations) and STORE.

Bruce Durdle wrote:
> So? That's not necessarily a limitation - it's a fact of Boolean life.
> Similarly, I could state that "All arithmetic operations are made up from
> additions and complements" - that's two basic operations. It doesn't stop me
> from using a defined combination of these to do multiplications, logarithms,
> or trigonometric functions - I don't have to explicitly use multiple additions to get these results.

The conventional systems of logic can only perform operations that recognize or manipulate static values residing in space. Ordinary logic treats variables in time poorly through the constraint of having to translate all temporal signs, signals, and effects into the space-domain so as to be suitable for static combination and arithmetic rules.

The static systems of logic, descended from Aristotle through Boole [1], Frege, Prior, Pnueli, and all modern logicists and natural philosophers, are lacking in several respects. These many models of logical specification [2] are unable to describe or to create any more than was given (the sum of the parts); they can't directly express causation (which instead must be humanly divined from static representations); and they can't be used to directly express or treat dynamic or changing scenarios, thus they can't deal directly with ongoing time or with processes that evolve in time. Yet these observable attributes, including synergy or emergent behavior, cause and effect, dynamic activities, and ongoing time, are very evident in the real world. Life would not have survived as well as it has without recognition and beneficial use of these attributes of reality.

One of the troubles with philosophy, logic, "computational intelligence," and other systems of thought is that the formal logic used to specify and substantiate or support concepts and systems is confined to static frames in the space-domain. All temporal information, therefore, must be referred to tokens and labels situated in space. These items of information are made into data, by sampling and storing, after which the only recourse is mechanical data-processing via Turing-type machines (TMs) or manual methods. The ancients played with and dealt with concepts by writing them down and by thinking of them as fixed conditions. We can now do the same using computers, but the logic operators being used have not expanded or grown with the passing of millennia. We are thus limited to combinations and sequences of AND, NOT, and STORE. The whole of computer science is founded on those few operators. First-order and modal logics are fundamentally static means through which actions are reckoned from fixed statements or frames, evaluated after-the-fact. Such static treatment, even aided by super-fast computers, often fails to produce results appropriate for dynamic processes.

Using static and fixed labels, formal logic discourse admits only of existence, both presence and absence aspects, and conjunction (coincidence in both space and time). This package of restrictions in thought excludes dynamics from that frozen arena. But life and other processes exhibit change and self-motivated activities. How can such functions be specified or even explored with logic that allows only static states or static labels about dynamic states? Aside from how a condition or process is and how it relates to other things in tableaux, we want to know or be able to precisely and concisely specify how it came to be, what caused it, and how it acts. There isn't any such treatment in formal logic, although in ordinary language we routinely express dynamics in a way that most understand our meaning.

So, isn't it time for a change?

1. George Boole's The Laws of Thought
2. About thirty "non-standard" logics (aside from predicate calculus and propositional logic) are listed in http://www.earlham.edu/~peters/courses/logsys/nonstbib.html

Best regards,
CharlieM

Reference 2. in the previous post should be:

http://www.earlham.edu/~peters/courses/logsys/nonstbib.htm

Best regards,
CharlieM

By Charles Moeller on 28 January, 2012 - 3:24 pm

CWW, you hit the mark with your suggestion ..."replicate a system
from a description to produce an ASIC with logic built in or something on that order."

Complexity is all the rage these days.
FPGAs would be an easy trial route ahead of ASICs.

My points:

1. Do we really need to program our toasters from our vacation cities, or is the vastly increased functionality built-in just because the facilities are easily available?

2. Do we really need something like LabVIEW to create the simple functionality that most industrial applications and consumer electronics require?

CharlieM
cmoel888@aol.com

By Curt Wuollet on 28 January, 2012 - 5:45 pm

No and no.

The first is its own answer. People will do it because they can. The second can be done much more efficiently with a much smaller, simpler processor, but it's not as easy. When cost and volume dictate, they generally are done that way, hence embedded systems. But not just anybody can do it. Do you need 40 billion lines of code to emulate a terminal? No, but people do it all the time. The largest single product of the software industry, bloat, has everything to do with easy and nothing to do with good. Each successive generation of products does the same task with 10 times the software. It's Boars law.

Regards
cww

CWW:

Why use a processor for simple tasks? The logic for most control tasks can be done in fewer than 100 gates, so why use even a 10,000-gate processor when a small FPGA will do the job?

cmoel888@aol.com
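The "under 100 gates" claim is easy to believe for classic control tasks. The standard motor start/stop seal-in circuit, for instance, needs only two gates' worth of logic; a sketch in Python (the rung itself is standard, the function name is just for illustration):

```python
# The classic start/stop "seal-in" rung: run = (start OR run) AND NOT stop.
# Gate count: one OR, one AND-with-inversion.
def seal_in(start, stop, run):
    return (start or run) and not stop

run = False
run = seal_in(start=True,  stop=False, run=run)   # operator presses START
run = seal_in(start=False, stop=False, run=run)   # button released: stays latched
run = seal_in(start=False, stop=True,  run=run)   # STOP pressed: drops out
```

The feedback of `run` into its own next value is the "STORE" that makes this two-gate network a memory element rather than pure combinational logic.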

By Curt Wuollet on 29 January, 2012 - 1:53 pm

Why indeed? We used to do things with 40 gates. It probably has something to do with finding an electrician who is interested in hardware description languages. My interest in having a processor core is because I do things that are now becoming practical in an inexpensive SoC. For example: I can see a sensor that uses machine-vision techniques for the price of a dumb sensor. It's tremendous overkill to use an ARM core in a sensor, but who cares what's in the box? Power, ground, and an output. One can distribute intelligence all over the system for the price of one PLC. We've already started down the road of systems being collections of specialized functions: motor drives, etc. At some point the software will, of course, still be there; we just won't see it.

Regards
cww

CWW:

I see software as an unneeded and complicating factor. One needs both software and hardware experts, rather than just hardware ones. Hardware can be exhaustively tested, whereas software testing may never end (and you still ship with bugs for the user to find).

Regards,
CharlieM

By William Sturm on 30 January, 2012 - 9:47 am

< Charlie said: "I see software as an unneeded and complicating factor. One needs both software and hardware experts, rather than just hardware ones">

I see little difference fundamentally between hardware and software.  Algorithms are algorithms, they can be designed into hardware or software, only the techniques are different.  Like relay logic and a PLC, they can both accomplish the same task.  Same with silicon vs. software.
 Bill Sturm

By James Ingraham on 30 January, 2012 - 11:29 am

CharlieM: "I see software as an unneeded and complicating factor."

Don't we all. :-)

CharlieM: "Hardware can be exhaustively tested, whereas software testing may never end (and you still ship with bugs for the user to find)."

Disagree. (a) Hardware can still be a problem. Note the 737 rudder issue. That was hardware. So was the Tacoma Narrows Bridge. (b) Moving the software complexity into the hardware just moves the problem rather than solving it. Using PLCs as an example again, imagine 10 relays controlled by ladder logic. If there's a software bug, you yell at the programmer. But without the software those relays don't *DO* anything. They just sit there. If you replicate the logic in the PLC by wiring the relays in a massively complicated way, you will just as likely have bugs. And they will be MUCH harder to find.

-James Ingraham
Sage Automation, Inc.

By Bruce Durdle on 30 January, 2012 - 2:09 pm

I second James's comment re wiring faults in relay systems being hard to find - and you can add in a whole slew of additional faults caused by minor differences in timing between so-called parallel relays, for example. But to me the major difficulty is not in whether the control system does what is wanted - it is in working out beforehand what we really want it to do. For some systems this can be relatively simple, but for even a moderately complex application the number of combinations and permutations can be appalling. For an automated start-up system, for instance, there is one right approach which will basically follow some pre-ordained design intent, but the number of wrong pathways that can be taken is very high - and you can guarantee that the one you decide is not an issue is the one that will bite you.

I have found the SFC approach very handy in both defining the decision paths through this type of problem - you can ask the mechanical or process engineers what should happen if the firing speed is not reached in 40 seconds - and in making sure that there is an operational cycle followed which ensures that any actions that have been done are undone at some point.
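Bruce's timeout example can be sketched as a tiny SFC-style step machine. The step names, speeds, and scan interval below are hypothetical, echoing the firing-speed question above:

```python
# A toy SFC (sequential function chart) engine: a step, its normal
# transition, and an explicit timeout path. The 40 s limit echoes the
# firing-speed example; all numbers are hypothetical.

def run_sfc(speed_samples, timeout_s=40, dt=10):
    """Advance from IGNITE to RUN, or branch to FAULT on timeout.

    speed_samples: firing-speed reading at each dt-second scan.
    """
    step, elapsed = "IGNITE", 0
    for speed in speed_samples:
        if step == "IGNITE":
            if speed >= 500:          # transition: firing speed reached
                return "RUN"
            elapsed += dt
            if elapsed >= timeout_s:  # the 'wrong pathway' is made explicit
                return "FAULT"
    return step

print(run_sfc([100, 300, 600]))        # RUN (speed reached in time)
print(run_sfc([100, 150, 180, 200]))   # FAULT (40 s elapsed, still slow)
```

The value of the SFC form is visible even at this scale: every exit from the step, including the undesired one, is enumerated where the process engineers can review it.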

And don't forget that any control system will also have a wetware component.

Bruce.

>> CharlieM: "I see software as an unneeded and complicating factor."

>> Don't we all. :-)

>> CharlieM: "Hardware can be exhaustively tested, whereas software testing may never end (and you still ship with bugs for the user to find)."

> Disagree. (a) Hardware can still be a problem. Note the 737 rudder issue. That was
> hardware. So was the Tacoma Narrows Bridge. (b) Moving the software complexity
> into the hardware just moves the problem rather than solving it. Using PLCs as an
> example again, imagine 10 relays controlled by ladder logic. If there's a software bug,
> you yell at the programmer. But without the software those relays don't *DO*
> anything. They just sit there. If you replicate the logic in the PLC by wiring the relays
> in a massively complicated way you will just as likely have bugs. And they will be
> MUCH harder to find.

Yes ... and hardware ages over the years, and this happens very fast in a harsh environment (heat, radiation, and so on).

The good thing is that software can't be affected by radiation and heat. Another approach would be to minimize the number of hardware components and to use raw CPU power to implement virtual devices talking to simple I/O interfaces. Multicore CPUs support such a concept.

Best Regards
Armin Steinhoff

---- snip--- see http://www.control.com/thread/1327707041#1328018928
James Ingraham said:
> Disagree. (a) Hardware can still be a problem. Note the 737 rudder issue. That was
> hardware. So was the Tacoma Narrows Bridge. (b) Moving the software complexity
> into the hardware just moves the problem rather than solving it. Using PLCs as an
> example again, imagine 10 relays controlled by ladder logic. If there's a software bug,
> you yell at the programmer. But without the software those relays don't *DO*
> anything. They just sit there. If you replicate the logic in the PLC by wiring the relays
> in a massively complicated way you will just as likely have bugs. And they will be
> MUCH harder to find.

Before software, plant automation ran just fine.

The problems in software multiply with complexity, which climbs ever higher. Is software really necessary? If control systems relied more on hardware and less on software, some of our reliability problems would simply vanish, and time- and safety-critical applications would improve. Wouldn't it be a good thing to surely and systematically decrease the use of and dependence upon software, in favor of more reliable and testable hardware?

What is software, anyway? We can find that software selects, sequences, and times the hardware functions. But software itself is sequenced and timed by hardware. Software, in the final analysis, can only tell the hardware what to do, in what order and, in some cases, how long to do the activity. Software provides direction to the hardware, but hardware actually performs all the functions. Software can only tell the hardware what it is time to do:


001010 (now it is time to) Do X,
001012 (now it is time to) Do Y,
001014 (now it is time to) Do Z,

Hardware is the necessary and more robust physical layer that constitutes and connects subsystems. Software consists of fragile over-layers of human-composed commands that generate their own problems. If the hardware for control systems were smarter in matters temporal, it would not need as much software, or perhaps it would need none. If the hardware could be designed and configured to know when to do the functions and operations it already knows how to do, it would not need software to tell it when to do them. Shifting software functions to hardware would be key in reducing software dependency. Hardware is faster, surer, and more reliable than software. If the hardware did not need software, it could be more autonomous and simpler. Controller design efforts would be more focused and efficient as well.
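The "now it is time to Do X" listing above can be made literal with a toy sketch: the "software" is nothing but an ordered schedule, and the "hardware" functions do the actual work (all names and addresses hypothetical):

```python
# The 'software' below only sequences; the work itself is done by the
# 'hardware' stand-in functions. Addresses are illustrative octal values.

log = []

def do_x(): log.append("X")   # stand-ins for hardware functions
def do_y(): log.append("Y")
def do_z(): log.append("Z")

# The program: addresses paired with 'what it is time to do'.
program = [(0o1010, do_x), (0o1012, do_y), (0o1014, do_z)]

# The fetch-execute cycle: step through addresses in order and fire
# each hardware function. Sequencing is the only thing software adds.
for _addr, op in sorted(program):
    op()

print(log)  # ['X', 'Y', 'Z']
```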

Armin Steinhoff said:
> Yes ...and hardware gets older over the years and this happens very fast in a
> harsh environment (heat, radiation a.s.o)

Self-testing hot-swap hardware for that.

> The good thing is that software can't be effected by radiation and heat.
> Another concept would be to minimize the number of hardware components and to use
> raw CPU power to implement the concept of virtual devices talking to simple IO
> interfaces. Multicore CPUs are supporting such a concept

Amidst further complexity.

Best regards,
CharlieM

By James Ingraham on 31 January, 2012 - 2:34 pm

CharlesM: "Before software, plant automation ran just fine."

Selective memory. Yes, there were plenty of plants running before the invention of the microprocessor. They had their own share of problems. WAY back, a water wheel would turn outside, driving every single machine in the plant. If the belt broke, every machine went down. If you wanted to add a machine, every machine went down. If you wanted to fix a machine, every machine went down.

You didn't address my points about the 737 rudder malfunction or the Tacoma Narrows bridge. The 737 rudder problem took over a DECADE to find. If you specifically want a plant automation example, the Union Carbide Bhopal disaster had nothing to do with software.

CharlesM: "The problems in software multiply with complexity, which climbs ever higher."

Again, the problem is not software. The problem is the TASK. If you replicate my ladder logic with relays you will have EXACTLY the same complexity.

CharlesM: "Is software really necessary?"

Apparently. Most of the tasks I have programmed could be theoretically done with hardware alone, but it would be a royal pain, and you couldn't hit the cycle times. And what do you do when you change something? On my systems, if on Tuesday they run out of the correct box size and need to run a slightly bigger box, they just go change a couple values on the HMI and keep running. Do you really want to re-program an FPGA for that? Or spin a new ASIC?

I don't have any customers telling me 'Please give us less information out of your machine. Make sure it's all bang-bang automation rather than servo drives. And don't give me one of those new-fangled color screens; I want 137 flat black push-buttons arrayed in no particular order.'

CharlesM: "If control systems relied more on hardware and less on software, some of our reliability problems would simply vanish..."

My counter-argument remains the same. Yes, a relay is more reliable than software. Incorporate the functionality of the software into the relay, and the complexity is the same; it therefore has the same reliability problems.

-James Ingraham
Sage Automation, Inc.


>CharlesM: "Before software, plant
>automation ran just fine."
>
>Selective memory. Yes, there were
>plenty of plants running before the
>invention of the microprocessor. They
>had their own share of problems. WAY
>back, a water wheel would turn outside,
>driving every single machine in the
>plant. If the belt broke, every machine
>went down. If you wanted to add a
>machine every machine went down. If you
>wanted to fix a machine every machine
>went down.

Now, in addition to hardware faults, we have software faults.

>You didn't address my points about the
>737 rudder malfunction or the Tacoma
>Narrows bridge. The 737 rudder problem
>took over a DECADE to find. If you
>specifically want a plant automation
>example, the Union Carbide Bhopal
>disaster had nothing to do with
>software.

Sorry for not addressing your points:

The Bhopal disaster had nothing to do with electronic/electrical hardware, but was attributed to shoddy maintenance.

I am not familiar with the 737 rudder problem. Was it electronic logic hardware that was the problem?

The Tacoma Narrows, to my knowledge, was a case of unpredicted harmonic oscillation caused by wind-shear strumming of the suspension cables. Hardly an electronic hardware problem.

Software disasters:

"Computers are increasingly being introduced into safety-critical systems and, as a consequence, have been involved in accidents. Some of the most widely cited software-related accidents in safety-critical systems involved a computerized radiation therapy machine called the Therac-25. Between June 1985 and January 1987, six known accidents involved massive overdoses by the Therac-25 -- with resultant deaths and serious injuries. They have been described as the worst series of radiation accidents in the 35-year history of medical accelerators." [1]http://ei.cs.vt.edu/~cs3604/lib/Therac_25/Therac_1.html

"On February 25, 1991, during the Gulf War, an American Patriot Missile battery in Dharan, Saudi Arabia, failed to intercept an incoming Iraqi Scud missile. The Scud struck an American Army barracks and killed 28 soldiers. A report of the General Accounting office, GAO/IMTEC-92-26, entitled Patriot Missile Defense: Software Problem Led to System Failure at Dhahran, Saudi Arabia reported on the cause of the failure. ….. Ironically, the fact that the bad time calculation had been improved in some parts of the code, but not all, contributed to the problem, since it meant that the inaccuracies did not cancel." http://www.math.psu.edu/dna/455.f96/disasters.html

There have been many others.

>CharlesM: "The problems in software
>multiply with complexity, which climbs
>ever higher."
>
>Again, the problem is not software.
>The problem is the TASK. If you
>replicate my ladder logic with relays
>you will have EXACTLY the same
>complexity.

IMHO the input sensors and output contactor or triac or thyratron hardware have to be there anyway. The software is yet another complicating set of factors, including fetch-and-retrieve and instruction-decode delay times on top of any hardware delays.

>CharlesM: "Is software really
>necessary?"
>
>Apparently. Most of the tasks I have
>programmed could be theoretically done
>with hardware alone, but it would be a
>royal pain, and you couldn't hit the
>cycle times. And what do you do when
>you change something? On my systems, if
>on Tuesday they run out of the correct
>box size and need to run a slightly
>bigger box, they just go change a couple
>values on the HMI and keep running. Do
>you really want to re-program an FPGA
>for that? Or spin a new ASIC?

Process flexibility can, in most cases, be designed in to be dial-able or switch-selectable.

>I don't have any customers telling me
>'Please give us less information out of
>your machine. Make sure it's all
>bang-bang automation rather than servo
>drives. And don't give me one of those
>new-fangled color screens; I want 137
>flat black push-buttons arrayed in no
>particular order.'
>
>CharlesM: "If control systems relied
>more on hardware and less on software,
>some of our reliability problems would
>simply vanish..."
>
>My counter-argument remains the same.
>Yes, a relay is more reliable than
>software. Incorporate the functionality
>of the software into the relay and the
>complexity is the same, therefore has
>the same reliability problems.

Software requires a TM (Turing machine) and turning all information into data, voluminous amounts of which must be minded and stored or thrown away.
Data retrieved, even if correct, is subject to improper interpretation as well.

Simpler is better.

Best regards,
CharlieM

By James Ingraham on 4 February, 2012 - 2:09 pm

CharlieM: "The Bhopal disaster had nothing to do with electronic/electrical hardware, but was attributed to shoddy maintenance."

True, it was not electronic hardware that failed, but PHYSICAL hardware. Which means that Bhopal would have happened regardless of which way you controlled it.

CharlieM: "I am not familiar with the 737 rudder problem. Was it electronic logic hardware that was the problem?"

Again, mechanical in nature. The hydraulic system could get stuck at a point where the fluid went the OPPOSITE way from what the pilot intended, so the rudder would go left when they meant right. The pilot would of course react by going more to the right... which meant the rudder would go more to the left, and eventually the plane would get sideways and flip over and then crash.

CharlieM: "The Tacoma Narrows, to my knowledge, was a case of unpredicted harmonic oscillation caused by wind-shear strumming of the suspension cables. Hardly an electronic hardware problem."

Not electronic hardware, no, but hardware nonetheless. (Also, it was an entirely predictable harmonic oscillation, even then. They didn't predict it, but they SHOULD have.) I brought these examples up to counter your "hardware can be tested to perfection" argument. No, it can't.

CharlieM: "Software disasters..."

Well aware of the Therac-25 debacle. I've even posted about it a few times here on control.com. And I know about the Patriot Missile problem. That one is particularly galling, because they KNEW about the issue and simply put in the manual to reboot the system every few hours. Don't get me wrong, I'm not saying software is a panacea. But I don't see how you'd have solved those two problems with hardware.

The only well-known purely electronic hardware problem I can think of is the Pentium floating-point bug.

CharlieM: "IMHO the input sensors and output contactor or triac or thyratron hardware has to be there anyway. The software is yet another complicating set of factors..."

Okay, but how do you get the contactor or triac or whatever to do what it's supposed to at the right time? SOMETHING has to let it know.

CharlieM: "Software requires a TM and turning all information into data, voluminous amounts of which must be minded and stored or thrown away."

Okay. So what do I do instead? How am I going to get my dual 7-axis articulated arm robot to pour the right drink at the right time?
http://www.youtube.com/watch?v=X-tB3ZYK2j0

-James Ingraham
Sage Automation, Inc.

By William Sturm on 31 January, 2012 - 3:09 pm

< Charlie said :: "If the hardware did not need software, it could be more autonomous and simpler." >

While I totally agree that simpler systems are exponentially more reliable, I do not agree that hardware will be simpler without software. I think that hardware would need to be added to do the logic that was previously done in software. Then all you have done is change the programming techniques from software to hardware.

Hardware is much less likely to change during operation; that I could agree with. I also think that software is frequently too complex. Since it is "soft," extra features frequently get added. There is a Peter Principle for software, I suppose. Will that not happen with hardware design?

Bill Sturm

<< Charlie said :: "If the hardware did not need software, it could be more autonomous and simpler.">>

> William Sturm said: While I totally agree that simpler systems are exponentially more reliable,

OK .. then let's do distributed processing by a network of small devices.

> William Sturm said: I do not agree that hardware will be simpler without software. I think that hardware would need to be added to do the logic that was previously done in software. Then all you have done is change the programming techniques from software to hardware. >

There is no hardware without software ... but one must always decide which logic should be done in hardware and which in software (with an MCU).

> William Sturm said: Hardware is much less likely to change during operation, that I could agree with. <

FPGAs can be incrementally updated ...

> William Sturm said: I also think that software is frequently too complex. Since it is "soft", extra features frequently get added. There is a Peter Principle for software, I suppose. Will that not happen with hardware design? <

Yes ... hardware design is more and more based on software which programs programmable hardware.

The only difference between hardware and software is how many parallel operations can be done physically.

Armin Steinhoff

> Before software, plant automation ran just fine.
>
> The problems in software multiply with complexity, which climbs ever higher. Is software really necessary?

Yes ... it is. Software is just a special state of a piece of configurable hardware ... it's called memory.
In the end, all of the operations are done in hardware.

IMHO ... the complexity of industrial applications is so high that it's impossible to do them in plain hardware logic.

Best Regards
Armin Steinhoff
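Armin's "software is memory" point can be illustrated with an FPGA-style lookup table: the stored bits ARE the configuration, and evaluation is a pure address lookup. The target function and bit ordering here are hypothetical:

```python
# A 4-input FPGA-style LUT realizes any Boolean function purely by
# what is stored at each address. Illustrative only.

def make_lut(func, n_inputs=4):
    """Burn a truth table into a list; this list IS the 'configuration'."""
    return [func(*[(i >> b) & 1 for b in range(n_inputs)])
            for i in range(2 ** n_inputs)]

# Target logic: out = (a AND b) OR (c AND NOT d)
lut = make_lut(lambda a, b, c, d: (a & b) | (c & (1 - d)))

def evaluate(lut, a, b, c, d):
    """The 'hardware': a pure address lookup, no instructions executed."""
    return lut[a | (b << 1) | (c << 2) | (d << 3)]

print(evaluate(lut, 1, 1, 0, 0))  # 1  (a AND b holds)
print(evaluate(lut, 0, 0, 1, 1))  # 0  (c AND NOT d fails: d=1)
```

Changing the "software" means rewriting the 16 stored bits; the lookup mechanism itself never changes, which is exactly the memory-as-configuration view.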

By pvbrowser on 8 June, 2013 - 4:50 am

> And most decry the procedural programming language stepping stone between mind and hardware. Assuming we
> don't veer off into massively parallel nanocomputers or chemical computing, I think the next step would be for our
> thinking machines to replicate a system from a description to produce an ASIC with logic built in or something on that
> order. Who will champion that? IP houses like ARM or the silicon giants.

I think the quantity of devices produced of any special ASIC type is much smaller than the quantity of ARM SoC devices.
Thus ARM solutions should be much cheaper.

My question is "What about hard realtime with ARM systems?"
For example, is it possible to assign a given core exclusively to realtime tasks, while everything else runs on the other cores?

If the realtime tasks have a bad "nice value" and you are able to measure or calculate a worst case cycle time of your realtime tasks, could we call it a realtime system?

> My question is "What about hard realtime with ARM systems?"
> For example, is it possible to assign a given core exclusively for
> realtime tasks?

> Where everything else will run on other cores.

Probably not exactly what you are writing about, but an engineer I know has been spending time getting the inexpensive BeagleBone (ARM) to do linux-cnc (realtime) stuff. Here is a blog he started recently about this:
http://bb-lcnc.blogspot.com/

By Charles Moeller on 8 June, 2013 - 5:22 pm

Even a single core dedicated to "realtime" tasks can't really run in real time. It must run in machine op-cycles, where sampling, storing, fetching, comparing, etc. interrupt the "realtime" tasks. This TM-type computing in a von Neumann architecture, mediated by software, is just not good enough for true realtime operation.

What is needed is a realtime portion of the system that is truly parallel-concurrent, with no instructions and no clock, so it can concentrate its attention and dedicate its operation to stimulus-response in continuous, uninterrupted time and respond in immediate fashion. This is how the supervisory or management portion of automated machine operation should be done.

>> My question is "What about hard realtime with ARM systems?"
>> For example, is it possible to assign a given core exclusively for
>> realtime tasks?
>
>> Where everything else will run on other cores.

By Charles Moeller on 8 June, 2013 - 2:06 pm

pvbrowser:

ASICs require massive chip volumes to be practical.

ARM is a set of "canned" solutions that already have large volume, if you can find one that fits your application, but they use TM-type operations and require large numbers of gates.

Hard realtime is not possible in a computing device.

In my opinion, a "real-time operating system" is a contradiction in terms, in that if an OS is relied upon to perform all functions, including time-critical process functions, then it is not a real-time system. This is based upon the observation that if one stops, in a real-time process, to look up or to fetch anything, or to perform any other function, one is truly out of real time. If the machine operation is linear-sequential, then by my definition it can't be real time.

The only justification I can think of to call any OS an "RTOS" is that those particular systems may allow one to get as close as one can to true real time while using Boolean-sequential logic (which, in truth, is not very close to true real time at all). That "everyone uses RTOSs" does not reverse the fact that they are very poor substitutes for true real-time operation. The universal factory automation that was imminent forty years ago has not yet happened, although the available computing power has since increased more than ten million-fold. It is still more cost-effective to manufacture most products semi-automatically. Because this is the actual case (just look around in industry), there must be something lacking in the current automation technology.

My definition: true real time is immediate response to the stimulus without having to perform any other function first. My parallel-concurrent PTQ system is, in that sense, true real time and produces a final hardware design from the description of the process being controlled.

By pvbrowser on 9 June, 2013 - 2:56 am

In response to Charles Moeller

To my knowledge, "realtime" means that you can ensure a maximum time between an input and the corresponding reaction.
Thus it is acceptable that you need some instructions to fetch the input, do some calculations, and output the result (the same as every PLC does).

This can be achieved by a simple loop on dedicated hardware, where you can guarantee a maximum cycle time.
It could also be achieved by interrupt-driven I/O where you know the number of instructions needed for the ISR.

On a von Neumann machine you would have to know the maximum amount of time needed for scheduling and interrupt-driven I/O. The userspace applications could not harm "realtime" because they can run at a lower priority than the realtime tasks.

When we talk about ARM / SoC (credit-card-sized) computers, we will have a well-defined set of hardware. We could run the embedded hardware headless without booting up to the GUI, thus making the output more predictable. What we will have to take into account will be network I/O and USB traffic.

We may not be able to calculate the exact maximum response time, but we are able to do measurements. If we measure with heavy load and also watch the CPU usage, we will be able to ensure our maximum response and/or cycle time.

If we have multiple cores and we use a dedicated core for the realtime tasks, I think we get a solution which can provide even hard realtime.
(Realtime does not necessarily mean fast, but predictable with respect to a maximum response time.)

This is not in conflict with the fact that you can get even faster reaction times on an ASIC.

But the CPU solution is much more flexible and much cheaper (provided the bandwidth of your process is limited). According to the sampling theorem, you must have at least 2 samples within the period of the highest input frequency; this is sufficient.
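The measure-instead-of-derive approach described above can be sketched as a loop that records the worst overshoot past its nominal cycle time. The cycle time and loop structure are illustrative; a real qualification run would execute under heavy system load:

```python
# Sketch: empirically bound the cycle-time jitter of a control loop.
# On a general-purpose OS this only measures, it guarantees nothing.
import time

def measure_jitter(cycle_s=0.001, cycles=500):
    """Return the worst overshoot (seconds) past the nominal cycle time."""
    worst = 0.0
    deadline = time.monotonic() + cycle_s
    for _ in range(cycles):
        # ... read inputs, solve logic, write outputs would go here ...
        late = time.monotonic() - deadline
        if late > worst:
            worst = late
        if late < 0:
            time.sleep(-late)  # idle out the remainder of the cycle
        deadline += cycle_s
    return worst

worst = measure_jitter()
print(f"worst overshoot: {worst * 1e6:.0f} us past the 1 ms cycle")
```

If the worst observed overshoot under load stays below the process's required response time, the system meets pvbrowser's working definition of realtime for that process.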

By Steinhoff on 9 June, 2013 - 1:16 pm

> According to my knowledge, "realtime" means that you can ensure a maximum time between an input and the corresponding reaction.

Real-time means delivering the correct data at the right time (or by a deadline). How fast the reaction must be depends on the whole real-time system.

In a slow real-time process it can be minutes ... a fast real-time process e.g. needs a reaction time in the range of micro or milliseconds.

But in both cases we have a real-time system ...

Regards
Armin Steinhoff

pvbrowser:

Yes, "realtime" has been defined as something like "being ready before needed."

Unfortunately, there are no guarantees for this at present in current technology, and no way to guarantee such a thing. The linear-sequential system will work only for as long as all events occur according to plan and there are no component failures or decision faults. The interrupts you mentioned are troublesome because, while being serviced, they suspend normal operation and undermine the idea of "realtime."

The Nyquist–Shannon sampling theorem does not work for transients or spurious signals.

My PTQ system replaces linear-sequential algorithms with reactive circuits that are real-time and parallel-concurrent and which respond immediately rather than after fetch and decode, search and match, instruction, clock, interrupt, or executive loop times.

IMHO ... next there will be more and more software :) We will see more and more programmable hardware in the form of huge, faster FPGAs.

New hardware description languages will allow us, through automated verification, to develop more reliable software and hardware.

This will bring real parallel processing to automation systems. For instance, the nodes of the VARAN fieldbus are based on reprogrammable FPGAs, and there are already code generators converting IEC 61499 code to VHDL code .... software is the future :)

Best Regards
Armin Steinhoff

PS: VARAN bus -> www.varan-bus.net

Armin Steinhoff:

I can appreciate your viewpoint, as more and more software seems to be happening.

But consider that every action that is actually performed in the controlled environment is performed by hardware. Every condition and event sensed in the real environment is generated by hardware. Why do we convert everything to software in the middle?

Instead of sampling and storing, then data-processing to determine the response, why don't we stay in the hardware mode and create simple stimulus-response mechanisms that react reliably and correctly? Control systems would be faster, safer, and less costly.

cmoel888@aol.com

@Charles Moeller,

If an FPGA (or CPLD) is the processing unit, all processing is done in the hardware and there is no external memory for program code. The software defines only the individual configuration of this hardware.
That means the result of compiling a piece of VHDL software is a special configuration of a piece of hardware (FPGA, CPLD ...).

Best Regards
Armin Steinhoff

Armin Steinhoff:

> if a FPGA (or CPLD) is the processing unit all processing is done in the
> hardware and there is no external memory for program code. The software defines
> only the individual configuration of this hardware.

> That means the result of the compilation of a piece of VHDL software is
> a special configuration of a piece of hardware (FPGA, CPLD ..)

Yes. That's my point. If a piece of hardware (CPLD, FPGA, etc.) is directly connected (I&O) to the process it controls, no sample-and-store is taking place, and no instructions are accessed or executed (i.e., no run-time software), and perhaps it uses no clock, it wins my approval!

Regards,
CharlieM

I'm with Bill, I'm not going to reply to this one...

For simple logical operations, any modern-day ARM or Atom CPU will rip the logic to pieces for [potentially] less than a few watts of power, in tens or hundreds of microseconds. So why are we having this discussion? For complex motion control I can see FPGAs or distributed computation, but this has already been done for decades.

Oh well, so much for not replying.... :o)

KEJR

> Why do we convert everything to software in the middle?

> Instead of sampling and storing, then data-processing to determine the
> response, why don't we stay in the hardware mode and create simple
> stimulus-response mechanisms that react reliably and correctly. Control systems
> would be faster, safer, and less costly.

At some level of complexity, I think the distinction becomes a semantic one. An FPGA configuration begins to look like firmware, perhaps akin to microcode in a CPU. Does this make it more virtuous somehow, more robust? I've seen bugs arise in FPGAs just as troublesome as (and sometimes harder to locate than) those in software.

It is true that FPGAs are becoming much more capable, but still, software tends to be more scalable. The scalability of FPGAs is bounded by their gate capacity - they're great right up to the point where they're not, and then they fall off the cliff. So, for fixed applications in the internals of products they tend to be a useful tool, whereas for user-programmable controllers intended for a wide variety of applications, the inconsequential cost of high-performance CPUs (relative to the other costs of an automation application) means that software will continue to be a preferred approach for a long time.

However, the specific language through which a system is programmed is quite another matter. The virtue of Relay Logic was that it closely replicated the control tools of the day -- electromechanical relays. Unfortunately, that day was back in the 1960s. Our industry moves s-l-o-w-l-y...

Ken Crater
Founder, Control.com
ken@control.com

Ken:

>> Why do we convert everything to software in the middle?

>> Instead of sampling and storing, then data-processing to determine the
>> response, why don't we stay in the hardware mode and create simple
>> stimulus-response mechanisms that react reliably and correctly. Control
>> systems would be faster, safer, and less costly.

>At some level of complexity, I think the distinction becomes a semantic one.
> An FPGA configuration begins to look like firmware, perhaps akin to microcode
> in a CPU. Does this make it more virtuous somehow, more robust? I've
> seen bugs arise in FPGAs, just as troublesome (and sometimes harder to
> locate) than those in software.

Simpler is better.

>It is true that FPGAs are becoming much more capable, but still, software tends
> to be more scalable. The scalability of FPGAs are bounded by their gate levels -
> they're great right up to the point where they're not, then they fall off
> the cliff. So, for fixed applications in the internals of products they tend
> to be a useful tool, whereas for user-programmable controllers intended
> for a wide variety of applications, the inconsequential cost of high performance
> CPUs (relative to the other costs of an automation application) means that
> software will continue to be a preferred approach for a long time.

I can agree with most of that. Higher levels of programmability need the CPUs and peripherals.

The applications I am thinking of are the toasters, home security, vehicle subsystems, factory automation, etc. These types of applications employ 98% of the microprocessors. Rather than use a $.25 device that can call up your aunt Jane, why not use a bare-bones hardware device that costs just a few cents?

>However, the specific language through which a system is programmed is quite
> another matter. The virtue of Relay Logic was that it closely replicated the
> control tools of the day -- electromechanical relays.

That kind of simplicity and straightforwardness can be available today.

Regards,
CharlieM

By James Ingraham on 4 February, 2012 - 1:50 pm

CharlieM: "Simpler is better."

I think we'd all agree with that... up to a point. It can be difficult to define "simpler." For example, imagine a large machine (packaging, web press, doesn't matter) with a mechanical drive train. In some ways this is very simple. Turn the crank, machine moves. On the other hand there are lots of problems with this setup. The designer had to be very careful with his gear sizes and (mechanical) power requirements. He had to include a very robust transmission system, since the torque for the entire machine transfers through it. Now consider multiple servo drives using electronic line shafting. In some ways this is more complicated. Servos have to be sized, wired, configured, tuned, etc. But by getting rid of that line shaft the machine becomes much more modular, and one part of the machine no longer directly influences the rest of it. Problems are easier to spot because they are narrowed down. If the line shaft jams you have to check the entire machine to find the jam. If a servo jams it puts an alarm up on an HMI and you walk right to it. So which is simpler?
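Electronic line shafting, as described above, boils down to each axis following a virtual master through its own gear ratio. A minimal sketch, with hypothetical section names and ratios:

```python
# Each 'servo' follows a virtual master axis through its own electronic
# gear ratio, so no mechanical shaft couples the sections. All names
# and numbers are illustrative.

def follow(master_pos, ratio, offset=0.0):
    """Commanded follower position: electronic gearing off one master."""
    return master_pos * ratio + offset

sections = {"infeed": 1.0, "sealer": 0.5, "cutoff": 2.0}

master = 10.0  # virtual master position (revs)
commands = {name: follow(master, r) for name, r in sections.items()}
print(commands)  # {'infeed': 10.0, 'sealer': 5.0, 'cutoff': 20.0}

# Fault isolation: taking one section offline doesn't stall the rest.
sections.pop("sealer")
commands = {name: follow(master, r) for name, r in sections.items()}
print(sorted(commands))  # ['cutoff', 'infeed']
```

The modularity argument falls out of the structure: each follower depends only on the shared master value, never on another section's mechanics.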

CharlieM: "The applications I am thinking of are the toasters, home security, vehicle subsystems, factory automation, etc."

Ah, now I see the problem. We've been talking at cross-purposes because you equate factory automation with toasters. I suppose there are a handful of tasks that are as simple as toasters; a simple zero-pressure chain-driven live roller accumulation conveyor, for example. Guess what? Most zero-pressure CDLR accumulation conveyors ARE done in hardware. This is the tiniest fraction of factory automation. A high-speed sortation conveyor would be a nightmare to do in hardware. And your earlier point that we had factories before we had software is correct, but we did not have 6-axis articulated arm robots.

I actually had to take a few deep breaths when I read "toasters" and "factory automation" in the same sentence. I don't think you actually meant to insult my life's work, or imply that the software I write has the difficulty level of programming a toaster. Nevertheless, you clearly don't have a good picture of what's hard vs easy in software. I'm glad that several other people have essentially agreed with my point that moving the complexity from the software to the hardware doesn't make the system more reliable.

-James Ingraham
Sage Automation, Inc.

P.S. I actually don't think programming a toaster is as easy as it sounds, either. Your point about the complexity of software is valid, and when making a toaster you have a lot of considerations to take into account. Not least is price; when I do a million dollar automation job I can throw in processors wherever I want. If you're trying to make money off $20 toasters you have to REALLY work at getting the cost out.

By Charles Moeller on 5 February, 2012 - 2:23 pm

James:

>> CharlieM: "Simpler is better."

> I think we'd all agree with that... up to a point. It can be difficult to
> define "simpler." For example, imagine a large machine (packaging, web press,
> doesn't matter) with a mechanical drive train. It some ways this is very simple.

---- snip ----
Moderator's note: See thread http://www.control.com/thread/1327707041#1328381434 for complete post


> CharlieM: "The applications I am thinking of are the toasters, home
> security, vehicle subsystems, factory automation, etc."

> Ah, now I see the problem. We've been talking at cross-purposes because you
> equate factory automation with toasters. I suppose there are a handful of tasks
> that are as simpler as toasters; a simple zero-pressure chain driven live
> roller accumulation conveyor, for >example.

---- snip ----

Your points above are valid and well-made. But complicated high-end factory machines and manufacturing processes originated in simpler times and means. Also, for every 100-yard long, 120-shaft paper-making machine, there exist hundreds of much simpler machines and potential automation projects. Linear-sequential software was complex at the outset and has grown at least in proportion to the machines it serves at that upper end. Perhaps the Peter Principle now applies.

Somewhere along the way to modern times, technology skipped over a nice alternative to linear-sequential data-processing: that of real time parallel-concurrent operation.

What I'm proposing is a parallel-concurrent method of process specification that can be implemented in parallel-concurrent hardware now on the simpler applications such as toasters, low-end assembly and fabrication machines, automotive braking systems, etc. As time goes on, this simpler, safer, and less complex method will grow up to encompass all the tools and processes of the day.

Best regards,
CharlieM

Hello Ken,

>> Why do we convert everything to software in the middle?

>> Instead of sampling and storing, then data-processing to determine the
>> response, why don't we stay in the hardware mode and create simple
>> stimulus-response mechanisms that react reliably and correctly. Control systems
>> would be faster, safer, and less costly.
> At some level of complexity, I think the distinction becomes a semantic one.
> An FPGA configuration begins to look like firmware, perhaps akin to microcode
> in a CPU. Does this make it more virtuous somehow, more robust?

Yes ... e.g. an FPGA can't change its configuration by itself.

> I've seen bugs arise in FPGAs, just as troublesome (and sometimes harder to locate)
> than those in software.

The roots of trouble with FPGAs are software-based, and that is where the bugs must be fixed. It is always possible that the synthesis process (the compiler) has a bug. But there are very good tools for the verification of FPGA code. Comparable tools are not available in the pure software world.

> It is true that FPGAs are becoming much more capable, but still, software tends to be
> more scalable. The scalability of FPGAs are bounded by their gate levels - they're
> great right up to the point where they're not, then they fall off the cliff.

There are a lot of SoCs realized with FPGAs. These SoCs are often based on a 32-bit microcontroller core (e.g. MicroBlaze), memory, and plain standard FPGA logic. The combination of a microcontroller core and FPGA logic seems very scalable to me.

> So, for fixed applications in the internals of products they tend to be a useful tool,
> whereas for user-programmable controllers intended for a wide variety of applications,
> the inconsequential cost of high performance CPUs (relative to the other costs of an
> automation application) means that software will continue to be a preferred approach
> for a long time.

Yes ... I share this view. But the usage of high-performance CPUs introduces a lot of trouble through their multicore design. Until now we don't have programming languages that describe real physical multiprocessing at the level of VHDL.

Best Regards
Armin Steinhoff

> The tangled threads of linear-sequential operation tend to
> inhibit each other and may cause faulty operation.

The word/concept you're looking for is called a semaphore.

http://en.wikipedia.org/wiki/Semaphore_(programming)

It's been around since at least the 1950's in the area of computer science.

Longer in other fields of communications.

http://flagexpressions.wordpress.com/2010/03/23/history-behind-semaphore-flags/

What's your point?
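
To make the concept concrete, here is a minimal sketch of semaphore-based coordination in Python (an illustrative example, not code from any poster): a counting semaphore limits how many threads may hold a shared resource at once.

```python
import threading
import time

# Counting semaphore: at most 2 of the 5 workers may hold the
# guarded resource at any moment.
sem = threading.Semaphore(2)
state_lock = threading.Lock()
inside = 0      # how many workers currently hold the resource
max_inside = 0  # high-water mark observed during the run

def worker():
    global inside, max_inside
    with sem:                      # blocks while 2 holders already exist
        with state_lock:
            inside += 1
            max_inside = max(max_inside, inside)
        time.sleep(0.01)           # simulate using the resource
        with state_lock:
            inside -= 1

threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(max_inside)   # never exceeds 2
```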

---- snip ----
>The word/concept you're looking for is
>called a semaphore.

---- snip ----

>What's your point?

Yes, I've implemented the semaphore concept, as well.

My point is that most industrial and consumer appliances would be better served with a parallel-concurrent mode of operation, rather than Turing-type machines and data-processing with linear-sequential software.

Best regards,
CharlieM

By William Sturm on 1 February, 2012 - 12:03 pm

< Charles said: "My point is that most industrial and consumer appliances would be better served with a parallel-concurrent mode of operation, rather than Turing-type machines and data-processing with linear-sequential software." >

Now that I can agree with.  Hopefully you are planning to find ways to get us there...
 Bill Sturm

Bill:

>>Charles said: "My point is that most
>>industrial and consumer appliances would
>>be better served with a
>>parallel-concurrent mode of operation,
>>rather than Turing-type machines and
>>data-processing with linear-sequential
>>software. 

>Bill said: Now that I can agree with.  Hopefully
>you are planning to find ways to get us
>there...

'Been working on it for some time.

It turns out that TMs with shared-resource hardware and linear-sequential software can't be improved very much no matter what you do. There are certain impediments (I have a list) that are inherent in that type of machine. The current trend of wider words and higher speeds does nothing to cure the fundamental problems of the method; more and more things can be done in a given unit of time, but at greater and greater expense.

I have looked at it afresh and done a bit of rethinking on the general problem of control from the ground-up. Upon first principles, we could say that:

IF the spatially-bound Turing machines using shared-resource hardware and linear-sequential software don't always result in the best control systems,

THEN perhaps spatio-temporal machines using dedicated parallel-concurrent hardware will provide a good alternative for some of the control systems.

The trouble with this approach: humans dislike change, especially after being strongly schooled in an accepted method.

Best regards,
CharlieM

By Vladimir E. Zyubin on 1 February, 2012 - 5:21 am

> We can do better with an alternate technology but if we could, who would be
> its champion?

Everything turns on description and conceptual means. Descriptions are created by human beings for human beings. So the implementation question (FPGA or RAM/ROM with code) is beside the point. Concepts rule. Concepts must fit both the specifics of the domain and the limits of human beings in processing information.

The domain specifics are: event-driven nature, synchronism (the necessity to work with timeouts, latencies, delays...), and concurrency (which reflects the physical independence of processes in the controlled object).

The human limits are a matter of psychology; in short, the necessity to structure information (divide and rule) and the linear-sequential form.

So the future (if there is one) is to find a domain-specific language (aka 4GL) for automation that allows the programmer to reflect domain-specific aspects in structured form with linear-sequential writing/reading.

Something like that :-)

Vladimir:

>So future (if it will be) is just to
>find domain specific language (aka 4th
>GL) for automation that allows
>programmer to reflect domain-specific
>aspects in structurised form with
>linear-sequential writing/reading
>
>Something like that :-)

Yes! I read your interesting paper, Hyper-automation.

My view:
One of the major difficulties faced by software designers is that although they live and work in three-dimensional space and multi-threaded time, they are constrained to create systems that reside completely within the space-domains of computers. These systems must sense and react to real-world temporal effects. The designers therefore are required to repeatedly translate input information from time to space, and translate from space to time for relevant output. Any temporal operations in between must be performed through space-only transformations. It is no wonder these unfortunate software designers often make mistakes.

Agreed that a new language must be formulated to correct this.

Best regards,
CharlieM

By Vladimir E. Zyubin on 2 February, 2012 - 10:09 am

CharlieM> There are certain impediments (I have a list) that are inherent in that type of machine.

It would be very interesting to talk about. Can you show the list? AFAIU, the TM was designed for calculation tasks ("the faster the better" principle). So, adding temporal features (I call it synchronism) qualitatively changes the Turing model.

By Charles Moeller on 2 February, 2012 - 10:29 pm

>> CharlieM said: There are certain impediments (I have a list) that are inherent in
>> that type of machine.

> Vladimir said: It would be very interesting to talk about. Can you show the list? AFAIU, TM
> was designed for calculation tasks ("the faster the better" principle). So,
> adding temporal features (I call it synchronism) qualitatively changes the
> Turing model.

The TM was specifically designed for decrypting encoded messages. Technically: to modify, by substitution algorithms, an encoded character string until intelligible words appeared.

The following are a few of the characteristics of computing technology that I consider to be impediments:

1. NO TEMPORAL LOGIC: There are no verbs, dynamic operators, or temporal logic in the fundamental computer logic that has now completely pervaded our daily lives.

2. SMALL NUMBER OF OPERATORS: The number of accepted fundamental operators and corresponding logic elements is small, limited to Boolean AND, NOT, and their combinations, and STORE, the memory operator. The Boolean operators in combination can describe or perform 16 different functions between and upon two operands. The set can also perform binary arithmetic. Imagine writing a six-page paper (or a process-control scheme) while limited to such a small number of letters, words, or concepts.

3. NO ON-GOING TIME: When performed by physical logic elements, the operations are considered to be executed in a null-time zone, as the evaluations are ready at the next live moment (usually at the next clock pulse or instruction), which is designed to occur after any contributing settling times or gate-delays have run to completion.

4. ALL OPERATIONS ARE IN THE SPACE-DOMAIN: All higher-level computer languages (i.e., in software) are ultimately decomposable to, hence built up from, combinations of the Boolean operations and STORE. In machine language, those operations are used to determine explicitly: a) the locations from which to acquire the numerical or conditional operands, b) what Boolean operations to perform, c) where to put the results, and d) what to do next. Every step must be predetermined.

5. SPACE-DOMAIN RESULTANTS: Boolean logic used in such a manner is static, is unobservant of change, and can be said to inhabit the space-domain. The time-domain is an untapped resource.
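
Point 2 can be checked mechanically. The sketch below (illustrative Python, not from the original post) closes the two projections f(a,b)=a and f(a,b)=b under NOT and AND, and recovers all 16 two-operand truth tables:

```python
# Represent a two-input Boolean function by its truth table over the
# input pairs (0,0), (0,1), (1,0), (1,1).
A = (0, 0, 1, 1)   # f(a, b) = a
B = (0, 1, 0, 1)   # f(a, b) = b

funcs = {A, B}
grew = True
while grew:                        # close {a, b} under NOT and AND
    grew = False
    new = set()
    for f in funcs:
        new.add(tuple(1 - x for x in f))                  # NOT f
        for g in funcs:
            new.add(tuple(x & y for x, y in zip(f, g)))   # f AND g
    if not new <= funcs:           # anything outside the current set?
        funcs |= new
        grew = True

print(len(funcs))   # 16: AND and NOT generate every two-operand function
```

The closure contains, among the rest, NAND (1,1,1,0), OR, XOR, and both constants, confirming that {AND, NOT} is functionally complete for two operands.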

One of the major difficulties faced by software designers is that although they live and work in three-dimensional space and multi-threaded time, they are constrained to create systems that reside completely within the space-domains of computers. These systems must sense and react to real-world temporal effects. The designers therefore are required to repeatedly translate input information from time to space, and translate from space to time for relevant output. Any temporal operations in between must be performed through space-only transformations. It is no wonder these unfortunate software designers often make mistakes. (Please excuse the repetition of my theme.)

Best regards,
CharlieM

>> Vladimir said: It would be very interesting to talk about. Can you show the list? AFAIU, TM
>> was designed for calculation tasks ("the faster the better" principle). So,
>> adding temporal features (I call it synchronism) qualitatively changes the
>> Turing model.

> CharlieM said: The TM was specifically designed for decrypting encoded messages.

IMHO ... the Turing Machine is a mathematical model defined by Alan Turing. It makes statements about decision problems and the theory of computability. (Every problem is decidable if it is computable by a TM....)

From that TM, different classes of machines are derived ... von Neumann and so on.

> CharlieM said: Technically: to modify, by substitution algorithms, an encoded character string until intelligible words appeared.

> The following are a few of the characteristics of computing technology that I consider to be impediments:

> 1. NO TEMPORAL LOGIC: There are no verbs, dynamic operators, or temporal logic in the fundamental computer logic that has now completely pervaded our daily lives.

Why is it an impediment? Temporal logic has a much higher
complexity than Boolean logic.

Best Regards
Armin Steinhoff

By Charles Moeller on 3 February, 2012 - 4:44 pm

Armin:

>>> Vladimir said: It would be very interesting to talk about. Can you show the list? AFAIU, TM
>>> was designed for calculation tasks ("the faster the better" principle). So,
>>> adding temporal features (I call it synchronism) qualitatively changes the
>>> Turing model.

>> CharlieM said: The TM was specifically designed for decrypting encoded messages.

Armin Steinhoff said:
> IMHO ... the Turing Machine is a mathematical model defined by Alan
> Turing. It makes statements about decision problems and the theory of
> computability. (Every problem is decidable if it is computable by a TM....)

>From that TM, different classes of machines are derived ... von Neumann and so on.

>> CharlieM said: Technically: to modify, by substitution algorithms, an encoded
>> character string until intelligible words appeared.

>> The following are a few of the characteristics of computing technology
>>that I consider to be impediments:

>> 1. NO TEMPORAL LOGIC: There are no verbs, dynamic operators, or temporal
>> logic in the fundamental computer logic that has now completely pervaded our
>> daily lives.

Armin Steinhoff said:
> Why is it an impediment? Temporal logic has a much higher
> complexity than Boolean logic.

The fact that there is no temporal logic native to the time-domain in the list of operations for computation has the effect of requiring any and all desired temporal operations to be carried out via the space-domain. This requires transformation from real-world inputs to computer-internal space-domain locations (data in numbered or labeled memory spaces). Furthermore, the accepted temporal logics that can be used via computer programming (see J.F. Allen or Amir Pnueli) are subject to that same constraint. None of the so-called temporal logic systems can be operated in the real time domain (as the events and conditions change); they must be performed on static space-domain data after conversion by sampling and storing.

Yes, the impediment is that real-world temporal data must be changed into static space-domain data before computer data-processing, and then the static results must be translated from the artificial space-domain to the real-world outputs to join ongoing real time.

The conversions to the space-domain and back to the time-domain after processing are complications that take time and may result in conversion errors, lost data, or improper interpretation during several of the many steps necessary. A much better scenario would be to be able to use the sensor information in real time as it occurs.
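
The sample-store-compute-output cycle described here can be sketched as follows (a hypothetical Python sketch; `read_sensor` and the cycle time are illustrative stand-ins, not real APIs):

```python
import time

def read_sensor(now):
    # Stand-in for a physical input: a slow square wave that
    # toggles every 0.5 s of monotonic time.
    return int(now * 2) % 2

def control_loop(cycle_time=0.05, cycles=10):
    """Classic sampled control: each cycle converts the live signal
    (time domain) into a stored value (space domain), computes on the
    stored snapshot, and only then produces an output."""
    log = []
    for _ in range(cycles):
        now = time.monotonic()
        sample = read_sensor(now)    # time -> space: capture and store
        output = 1 - sample          # compute on the static snapshot
        log.append((sample, output)) # space -> time: drive actuator here
        time.sleep(cycle_time)       # anything between samples is missed
    return log

log = control_loop()
```

Note that any input change shorter than `cycle_time` is simply invisible to the loop, which is the conversion-loss argument made above.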

Best regards,
CharlieM

> [clip]
> Armin Steinhoff said:
>> Why is it an impediment? Temporal logic has a much higher
>> complexity than Boolean logic.

> CharlieM said: The fact that there is no temporal logic native to the time-domain in the list of operations for computation has the effect of requiring any and all desired temporal operations to be carried out via the space-domain. This requires transformation from real-world inputs to computer-internal space-domain locations (data in numbered or labeled memory spaces). Furthermore, the accepted temporal logics that can be used via computer programming (see J.F. Allen or Amir Pnueli) are subject to that same constraint. None of the so-called temporal logic systems can be operated in the real time domain (as the events and conditions change); they must be performed on static space-domain data after conversion by sampling and storing.

> Yes, the impediment is that real-world temporal data must be changed into static space-domain data before computer data-processing, and then the static results must be translated from the artificial space-domain to the real-world outputs to join ongoing real time.

> The conversions to the space-domain and back to the time-domain after processing are complications that take time and may result in conversion errors, lost data, or improper interpretation during several of the many steps necessary. A much better scenario would be to be able to use the sensor information in real time as it occurs.

Yes ... and the best answer I could find today is here:
http://www.mnbtech.com/index.php?id=164

Software is the solution.

Best Regards
Armin Steinhoff

By Charles Moeller on 4 February, 2012 - 1:12 pm

Armin:

Charlie Said:>> The conversions to the space-domain
>>and back to the time-domain after
>>processing are complications that take
>>time to do and may result in conversion
>>errors, lost data, or improper
>>interpretation during several of the
>>many steps necessary. A much better
>>scenario would be to be able use the
>>sensor information in real time as it
>>occurs.

Armin Steinhoff said:
>Yes ... and the best answer I could
>find today is here:
>http://www.mnbtech.com/index.php?id=164
>
>Software is the solution.

Your reference: http://www.mnbtech.com/index.php?id=164 describes "Mixed Technologies," which is the addition of hardware solutions (FPGAs and Graphical Processing Units) to General Processing Units (common computer processing).

So, hardware is at least some of the solution.

There is a distinction between:
Computation: the modification of input character strings (data) to produce displayable information, and
Process Control: activities taken to ensure a process is predictable, stable, and consistently operates at the target level of performance with only normal variation.
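
A minimal example of a process-control rule in this sense (an illustrative sketch of mine, not anything from the thread) is a bang-bang thermostat with hysteresis: a pure stimulus-response mechanism whose behavior maps directly onto a comparator circuit rather than a stored program.

```python
def thermostat(temp, heater_on, setpoint=20.0, band=1.0):
    """Bang-bang control with hysteresis: a stimulus-response rule.
    The hysteresis band keeps the heater from chattering around
    the setpoint -- exactly what a comparator with hysteresis does."""
    if temp < setpoint - band:
        return True     # too cold: heater on
    if temp > setpoint + band:
        return False    # too hot: heater off
    return heater_on    # inside the band: hold the previous state

# Simulate a short temperature trace.
state = False
outputs = []
for t in [18.0, 19.5, 20.0, 21.5, 20.5, 18.5]:
    state = thermostat(t, state)
    outputs.append(state)

print(outputs)  # [True, True, True, False, False, True]
```

The entire rule is three comparisons and one remembered bit, which is why such tasks are routinely done in hardware.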

Computation can be used for process control, but it is not necessarily the best use for that technology. Appropriate uses of computation are cryptography, weather- and topological-map generation and updating, and art authentication, although we use it for most any task.

My point is that there is a more appropriate technology for process control than computing--and it is mostly hardware.

Best regards,
CharlieM

By Vladimir E. Zyubin on 5 February, 2012 - 3:40 am

CharlieM > My point is that there is a more appropriate technology for process
> control than computing--and it is mostly hardware.

What about a set of physical relays? :-) History tells us relays can be used for process control... as hardware (physical relays) and as software (LD IEC 61131-3).

So, I believe the first-order question is the question of an appropriate formal description... The question "hardware or software" is a second-order question.

BTW, I think PC cannot be described in terms of a Turing Machine, because of the timers at least.

By Charles Moeller on 5 February, 2012 - 1:00 pm

Vladimir:

CharlieM said:
>> My point is that there is a
>>more appropriate technology for process
>> control than computing--and it is
>>mostly hardware.

Vladimir wrote:
> What about a set of physical relays? :-) History tells us relays can be used
> for process control... as hardware (physical relays) and as software (LD IEC 61131-3).

Yes, relays and switches are hardware. The first nuclear power plant (A1W) I qualified on as instructor-operator was originally fitted with a Hagan boiler-level control system (air relays). We later converted to an electrical system that used magnetic-amplifiers (mag-amps).

> So, I believe first-order question is the question about appropriate formal
> description... Question "hardware or software" is a second-order question.

That is correct. A parallel-concurrent language is a necessary next step toward parallel-concurrent operation. If we are limited to computer languages as a means of expression and computer hardware as a means of implementation, all of which are time-shared and shared-resource, we will forever be limited to linear-sequential operation. Despite the many computer languages and variations on computer hardware and architecture to date, we still do not have the basis of true parallel-concurrent operation.

But do not worry, help is close at hand. I have made solving this question my life's work.

> BTW, I think PC can not be described in terms of Turing Machine because of the timers at least.

Time and timers can be converted to numbers in space, which is what we do all the time using TMs, sundials, our wall clocks, etc.

Best regards,
CharlieM

> [ clip]
> That is correct. A parallel-concurrent language is a necessary next step toward parallel-concurrent operation. If we are limited to computer languages as a means of expression and computer hardware as a means of implementation, all of which are time-shared and shared-resource, we will forever be limited to linear-sequential operation. Despite the many computer languages and variations on computer hardware and architecture to date, we still do not have the basis of true parallel-concurrent operation.

Parallel-concurrent operation happens today on every standard PC. Every PC has a lot of peripheral controllers (e.g. GPUs) that work physically in parallel, independently of the CPU. The old SMP systems and also the cores of multicore CPUs work completely in parallel.

If you add FPGA based PCI devices as peripheral devices it adds a lot of parallel operations to the PC.

Is this not a base for physical parallel-concurrent processing?

Best Regards
Armin Steinhoff

By Charles Moeller on 6 February, 2012 - 2:39 pm

Armin:

---- snip ----
> Parallel-concurrent operation happens today on every standard PC.
---- snip ----

> If you add FPGA based PCI devices as peripheral devices it adds a lot of
> parallel operations to the PC.

> Is this not a base for physical parallel-concurrent processing ?

"Processing," as a very common term in use today, means linear-sequential operation under the timing of software on a Turing-type machine.

So, at best you may get linear-sequential-based parallel-concurrent processing.

Best regards,
CharlieM

Armin Steinhoff wrote:
>> Parallel-concurrent operation happens today on every standard PC.
>> ---- snip ----

>> If you add FPGA based PCI devices as peripheral devices it adds a lot of
>> parallel operations to the PC. Is this not a base for physical parallel-concurrent processing ?

Charles Moeller wrote:
> "Processing," as a very common term in use today, means linear-sequential operation under the timing
> of software on a Turing-type machine.

> So, at best you may get linear-sequential-based parallel-concurrent processing.

So you want something like a clockless analog data flow computer?

Best Regards
Armin Steinhoff

What is the problem you guys are trying to solve? Yeah, modern programming still dates back to Turing machines, so what? A battery powered ARM processor that runs your tablet is 10 times more powerful than you need for the majority of control tasks, even a majority of motion control tasks (which are becoming more and more distributed anyway).

I feel that this discussion is largely academic and doesn't really service the needs of the group, who are more interested in real problems such as the fact that the big PLC companies are slow to release anything innovative even in terms of Computer Science of 30 years ago. Or how about getting a real shared tag database between motion, HMI, and PLC without going to a bloated database/OPC monster? Or how about getting Vendor A to talk to Vendor B because you have to support machines built 25-30 years ago? Those are real problems, not whether the underlying mechanism runs on hardware, software, firmware, or Jello-ware.

KEJR

By Vladimir E. Zyubin on 6 February, 2012 - 3:38 am

> CharlieM said:
> A parallel-concurrent language is a necessary next step toward
> parallel-concurrent operation.

AFAIK, there are a lot of such languages from occam and Esterel to VHDL. LD, FBD, SFC (IEC 61131-3) have parallelism at some extent and level as well.

> Despite the many computer languages and variations on computer hardware and architecture to
> date, we still do not have the basis of true parallel-concurrent operation.

But why must we have a basis of _true_ parallel-concurrent operation? I personally see no problem with "seems like true" parallel-concurrent operation. Moreover, there are well-known problems with physical parallelism that can produce so-called races. http://en.wikipedia.org/wiki/Race_condition
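
A short Python sketch (illustrative, not from the post) of the race being referred to and its conventional software remedy: a mutual-exclusion lock around the read-modify-write sequence. Without the lock, interleaved threads can lose increments; with it, the count is exact.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:        # without this lock, the read-modify-write
            counter += 1  # sequence can interleave and lose updates

threads = [threading.Thread(target=increment, args=(10000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # exactly 40000 with the lock; may be less without it
```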

> But do not worry, help is close at hand. I have made solving this question
> my life's work.

It will be very interesting to read about.

>> Vladimir said:
>> BTW, I think PC can not be described in terms of Turing Machine because of the timers at least.

> CharlieM said:
> Time and timers can be converted to numbers in space, which is what we do
> all the time using TMs, sundials, our wall clocks, etc.

Mostly agreed. But the question "What is time?" is very complex to speak about. I prefer to speak in terms of time intervals or events: timers, sundials, clocks, etc. produce events that can be counted. Yes, time can be presented as a number, but the TM "as is" does not include such numbers and does not assume any uncontrolled changes on its tape, because that would make it impossible to define the computational complexity of an algorithm, the main question in the theory of computation. The TM is intended for calculations, not for communication with an external environment, nor with other TMs, nor for thinking in terms of time intervals... delays, timeouts, latencies, and so on.

So, the question about the TM is not simple. It looks like the model is not a good fit for thinking about control algorithms (about concurrency and time intervals).
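
A minimal Turing-machine simulator (an illustrative Python sketch) makes this point visible: every step is a pure table lookup on (state, symbol), and nothing outside the tape -- no input events, no timers, no ongoing time -- can influence a step.

```python
def run_tm(tape, rules, state="q0", halt="halt", max_steps=1000):
    """Minimal Turing machine: each step looks up
    (state, symbol) -> (new state, symbol to write, head move).
    The model has no channel for external events or elapsed time."""
    tape = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = tape.get(head, "_")            # "_" is the blank symbol
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Example rule table: flip every bit, halting at the first blank.
rules = {
    ("q0", "0"): ("q0", "1", "R"),
    ("q0", "1"): ("q0", "0", "R"),
    ("q0", "_"): ("halt", "_", "R"),
}

result = run_tm("1011", rules)
print(result)  # -> 0100
```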

By Charles Moeller on 6 February, 2012 - 2:19 pm

> Yes, it can be presented as a number, but TM "as is" does not
> include such kind of numbers and does not assume any uncontrolled changes on
> its tape. Because it makes impossible to define the computational complexity of
> algorithm. The main question in theory of computation. TM is intended for
> calculations, but neither for a communication with an external environment
> nor with other TMs, nor thinking in terms of time intervals... delays, timeouts, latencies. And so on.

> So, the question about TM is not simple. It looks like the model is not
> fit to think about control algorithms (about concurrency and time intervals).

Vladimir:

You have stated the problem! That is my point. It is precisely why I am investigating alternative methods of control. The TM, as a paradigm, falls short of being able to describe various evident aspects of time. The TM can only deal with linear-sequential frames containing static data. If we go to the logic upon which the computer is based, we can find that the only temporal notion it can accommodate is the concept of coincidence. If we go back to the ancient philosophers' logic, we find the same limitation on static time. Closer to modern times, the French philosopher Henri Bergson noted this limitation in his 1889 essay Time and Free Will, in which he asked of logic, "Where is the becoming?" More recently, physicist Lee Smolin asked in his 2006 book The Trouble with Physics, "How can we represent time without turning it into space?"

My work on this subject expands the foundations of logic (words of description and operators), the corresponding logic elements (hardware), and the application of these to simple process control situations.

Best regards,
CharlieM

By Vladimir E. Zyubin on 7 February, 2012 - 11:47 am

> CharlieM wrote: More recently, physicist Lee Smolin asked in his 2006 book
> The Trouble with Physics, "How can we represent time without
> turning it into space?"

I cannot keep from asking:
What about "How can we represent space without turning it into time?" :-)

Lee Smolin is right when he tries to point out a problem, but his question is not quite correct.
The situation with time and space is symmetric:
time -- space
changes -- objects
duration -- length

> My work on this subject expands the foundations of logic (words of description and operators), the
> corresponding logic elements (hardware), and the application of these to simple process control situations.

Can it be told now? It would be interesting to look at. It is also very pleasant to communicate with a person with such broad knowledge of philosophy. And I suspect the annotation, which sounds so intriguing, may have a really interesting basis.

By Charles Moeller on 7 February, 2012 - 5:05 pm

Vladimir:

CharlieM wrote:
>> More recently, Physicist Dr. Lee Smolin in The trouble with Physics asked in 2006,
>> "How can we represent time without turning it into space?"

---- snip ----

Vladimir E. Zyubin wrote:
> Lee Smolin is right when he try to point out a problem, but his question is
> not correct enough. The situation with time and space is symmetric:
>time -- space
>changes -- objects
>duration -- length

Time and space are not interchangeable, despite Minkowski and Einstein. The time domain is a special circumstance, as one can go perceptively back and forth in space, but not in time. The dimension of time in common usage is also (as well as space) defined as smooth, infinitely dense, and infinitely extensible. In the constructed universe mediated by numbers and arithmetic, the properties of time are generally taken to be the same as, and in fact are mapped onto, a fourth spatial dimension having the general character of extension, or length. We use a counting mechanism to translate time-ticks into the space domain, as can be seen on the faces of our clocks. This practice fits nicely into our arithmetic computers, but it adulterates and obscures the true character of the time domain. Temporal logic in computers relies heavily on numbers and fixed conditions and, wonder of wonders, takes place wholly in the space-domain.

Best regards,
CharlieM

By Vladimir E. Zyubin on 8 February, 2012 - 12:08 am

CharlieM wrote:
>>> More recently, Physicist Dr. Lee Smolin asked, "How can we represent time without turning it into space?"

Vladimir E. Zyubin wrote:
>> The situation with time and space is symmetric.

CharlieM wrote:
>Time and space are not exchangeable, despite Minkowski and Einstein.

…and despite Lee Smolin.

I must confess, most physicists understand neither time nor space. Lee Smolin, judging by his question, is just one of them. IMHO. With my best regards to Minkowski, Einstein, Lee Smolin, and the others.

CharlieM wrote:
> The time domain is a special circumstance, as one can go perceptively forth and back in
> space, but not in time.

The "space" concept is no simpler than time.
Space can be non-Euclidean, for example.

> The dimension of time in common usage is also (as well as space) defined as smooth, infinitely
> dense, and infinitely extensible. In the constructed universe mediated by numbers
> and arithmetic, the properties of time are generally taken to be the same as,
> and in fact are mapped onto, a fourth spatial dimension having the general
> character of extension, or length. We use a counting mechanism to translate
> time-ticks into the space domain, as can be seen on the faces of our clocks.

As for me, clocks are just changing objects; time and space are just forms of their existence. Clocks make no sense without changes... or events. And what we have is just called "metrological routines"... which need both space and time :-)

Ok. :-x

The operation of threads via computer, whether in serial, parallel, or time-shared fashion, is nonetheless different from performing any or all of the activities "while" they are all active in configuration-bound hardware. For example, a certain 16-bit adder may be clocked at up to 40 MHz, but it uses less power, and its results are available more quickly, if operated in a flow-through manner, performing operations as the operands are presented. Efficiency and speed are gained by local control in hardware over centralized control via software.
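To make the distinction concrete, here is a minimal Python sketch of the two evaluation styles. The adder model, class names, and toy "clock" are purely illustrative assumptions of mine, not taken from any actual design:

```python
# Illustrative toy model: flow-through (combinational) vs. clocked evaluation
# of a 16-bit adder. Names and the clock model are hypothetical.

MASK16 = 0xFFFF  # 16-bit word width

def flow_through_add(a, b):
    """Flow-through adder: the sum exists as soon as the operands are presented."""
    return (a + b) & MASK16

class ClockedAdder:
    """Registered adder: operands are latched, and the sum appears only on a clock edge."""
    def __init__(self):
        self._latched = (0, 0)
        self.result = 0

    def present(self, a, b):
        # Operands arrive at the inputs but nothing happens yet.
        self._latched = (a & MASK16, b & MASK16)

    def clock(self):
        # Only the clock edge moves the computation forward.
        a, b = self._latched
        self.result = (a + b) & MASK16

# Flow-through: the answer is available immediately.
assert flow_through_add(2, 3) == 5
assert flow_through_add(0xFFFF, 1) == 0  # wraps around at 16 bits

# Clocked: the same answer, but only after a clock edge.
adder = ClockedAdder()
adder.present(2, 3)
assert adder.result == 0   # stale until clocked
adder.clock()
assert adder.result == 5   # result appears one clock later
```

The sketch only models availability of results, not power; the point is that the flow-through version has no step between "operands present" and "result valid."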

It sounds like a low-level implementation question that can save a couple of watts, akin to the variable frequency of quartz oscillators. I can imagine how that idea could be used in embedded systems, but I cannot see much sense in using it for objects that demand tens and hundreds of kilowatts.

Best regards, Vladimir E. Zyubin

For the complete message this is responding to see http://www.control.com/thread/1327707041#1328379156

> Armin Steinhoff said:
>> Yes ... and the best answer I could find today is here:
>> http://www.mnbtech.com/index.php?id=164
>>
>> Software is the solution.

> Charlie Said: Your reference: http://www.mnbtech.com/index.php?id=164 describes "Mixed Technologies,"
> which is the addition of hardware solutions (FPGAs and Graphical Processing Units) to General Processing Units (common computer processing).

> So, hardware is at least some of the solution.

Programmable hardware is at least the solution.

> Charlie Said: There is a distinction between:
> Computation: the modification of input character strings (data) to produce displayable information, and
> Process Control: activities taken to ensure a process is predictable, stable, and
> consistently operates at the target level of performance with only normal variation.

From the point of view of computation, there is no difference.

> Charlie Said: Computation can be used for process control, but it is not necessarily the best use for
> that technology. Appropriate uses of computation are cryptography, weather- and topological-map generation
> and updating, and art authentication, although we use it for most any task.

The use of mixed technology is the issue ... that means you can include the right hardware for your individual problem.

Best Regards
Armin Steinhoff

By Charles Moeller on 5 February, 2012 - 2:41 pm

Armin:
---- snip ----

> The use of mixed technology is the issue ... that means you can include the
> right hardware for your individual problem.

The right hardware can do the required tasks in a parallel-concurrent mode, but if you insist upon run-time software as we know it, then you will necessarily end up with linear-sequential operation.

Best regards,
CharlieM

> Armin wrote:
>> The use of mixed technology is the issue ... that means you can include the
>> right hardware for your individual problem.

CharlieM wrote:
> The right hardware can do the tasks required in a parallel-concurrent mode,
> but if you insist upon run-time software as we know it then you are necessarily going to have linear-sequential operation as a result.

Sorry, that's not the case if the computing hardware nodes (without a CPU) are, e.g., programmable FPGAs.

Best Regards
Armin Steinhoff

Charles, for many years process control was supplied by pure hardware solutions first using clever pneumatic computational elements, then by analog electronics. The industry migrated to digital circuitry in an effort to reduce cost, AND to gain functionality that was difficult, expensive, or impossible to do with hardware alone. In real process control systems, we do computations that are well beyond FPGA capability.

Yes, we can build hardware with great functionality, but software brings it to life. I don't know why you do not believe in software, but it is a retrograde attitude, and not productive. Software has enabled us to build control systems using elements such as model predictive control and dynamic matrix control that would be uneconomical if implemented in hardware alone, to quote just two examples.

The optimal control system can no longer be implemented without a very significant amount of software - although some suppliers insist on calling it "firmware." There is no "After Software," in my opinion. More likely - better software.

Dick Caro, CEO, CMC Associates
Certified Automation Professional (ISA)
Buy my books at the ISA Bookstore:
Wireless Networks for Industrial Automation
Automation Network Selection
Consumers Guide to Fieldbus Network Equipment for Process Control

By Curt Wuollet on 5 February, 2012 - 12:44 pm

Or at least more of it.

cww

Dick Caro wrote:
> The optimal control system can no longer be implemented without a very significant amount of software - although some suppliers insist on calling it "firmware." There is no "After Software," in my opinion. More likely - better software. <

By Charles Moeller on 7 February, 2012 - 9:21 pm

> Charles, for many years process control was supplied by pure hardware solutions

---- snip ----

> The optimal control system can no longer be implemented without a very
> significant amount of software - although some suppliers insist on
> calling it "firmware." There is no "After Software," in my opinion. More likely - better software.

Dick Caro:

Not all control problems require a computer, but that's what is used in most cases, simply because there is no easier or better-documented alternative. Sometimes a microprocessor is overkill or not appropriate. But in the cases where you absolutely need computing, you necessarily must have software. I agree that computation has brought us to our present technical position, but it is only one means of solving control problems. There are other means, one of which I have been addressing.

Software provides direction to the hardware, but the hardware actually performs all the functions. Software can only tell the hardware what it is time to do. Now if there were a way to build into the hardware a sense of time, so that it would automatically do what it is time to do, we would need less software, or in some cases none. That would be safer, would ease the class of faults attributable to software, and would cost less to build and maintain.

We have had decades of better software and we still get reports like The Standish Chaos Report [a] that shows a (currently) backward trend in software projects' success rates:
32% Successful (On Time, On Budget, Fully Functional)
44% Challenged (Late, Over Budget, and/or Less than Promised Functionality)
24% Failed (Canceled or never used)
a. http://www.galorath.com/wp/2009-standish-chaos-report-software-going-downhill.php

Other reports and articles indicate software production efficiency is only about 50% for the successful 32% of projects. That is because as much time is spent on correcting software as is spent on its creation. Production efficiency is even less for the "challenged." The "failed" category gets a zero, of course.

My aim is to improve control engineering and the control engineer's lot.

Best regards,
CharlieM

By Curt Wuollet on 7 February, 2012 - 10:13 pm

That argument ignores the reality of economics.

A different piece of hardware for each application is not feasible in comparison to a piece of hardware that can do almost anything plus software to tailor it to the task. There are a lot more people who can do the software than who could do the hardware. And it takes serious investment and capital to make the hardware, much more than it takes to make the software. In fact, the toolchain I use is free and runs on top of a free OS. It is more economical to use commodity generic hardware and program it, even for trivial tasks. It may not be to your liking or even mine, and may not be esoterically correct, but the customer simply doesn't care, and they pay me. I suspect that will prevail for quite some time unless someone comes up with a practical silicon compiler and foundry at very low cost.

Regards
cww

By James Ingraham on 8 February, 2012 - 11:31 am

cww: "And it takes serious investment and capital to make the hardware, much more than it takes to make the software... unless they come up with a practical silicon compiler and foundry at very low cost."

I agree completely... well, almost completely. We're well on our way to low cost / free hardware design tools (e.g. SystemC). And I've been hearing about circuits printed by ink-jet for at least a decade. So it may indeed be possible one day to create hardware with no more difficulty or expense than current software development.

Of course, it won't be any more reliable than current software, either. If you write software the traditional way and "compile" it to hardware you'll still have the exact same propensity for bugs. And somehow I doubt that ink-jet printed circuits are going to be up to the standards that the big automation companies have for making hardware robust and reliable.

-James Ingraham
Sage Automation, Inc.

By Charles Moeller on 9 February, 2012 - 10:47 pm

Moderator's note: This message originally quoted the entire message in post http://www.control.com/thread/1327707041#1328718709. Rather than reproduce it here, please refer to it.

James:

We can open up the world of low-level applications to automation if we eliminate the requirement and ills of software. The hardware has to be there anyway.

Best regards,
CharlieM

James:

> We're well on our way to low cost / free hardware design tools
> (e.g. SystemC). And I've been hearing about circuits printed by ink-jet for at
> least a decade. So it may indeed be possible one day to create hardware with
> no more difficulty or expense than current software development.

> Of course, it won't be any more reliable than current software, either.
> If you write software the traditional way and "compile" it to hardware you'll
> still have the exact same propensity for bugs. And somehow I doubt that ink-jet
> printed circuits are going to be up to the standards that the big automation
> companies have for making hardware robust and reliable.

The biggest problem designers have with control software is the requirement to repetitively translate to and from the space domain.

Logic, software, and computational prowess are concentrated almost wholly in the space-domain. There is no temporal logic that is native to the time-domain. Data is taken from the space- and time-domains (or space-time domain) of the real world via sampling and selection and filed in memory (space-domain). A list of instructions (space-domain) is applied to the data and it is processed by means of Boolean logic operators (all space-domain transformations that can be laid out in row- and column-truth-tables). All reference values, if any, are discrete.

One of the major difficulties faced by software designers is that, although they live and work in three-dimensional space and multi-threaded time, they are constrained to create systems that reside completely within the space-domains of computers. These systems must sense and react to real-world temporal effects. The designers therefore are required to repeatedly translate input information from time to space, and to translate from space to time for relevant output. Any temporal operations in between must be performed through space-only transformations. It is no wonder these unfortunate software designers often make mistakes.

Best regards,
CharlieM

James Ingraham wrote:
>> If you write software the traditional way and "compile" it to hardware you'll
>> still have the exact same propensity for bugs. And somehow I doubt that ink-jet
>> printed circuits are going to be up to the standards that the big automation
>> companies have for making hardware robust and reliable.

Charles Moeller wrote:
> The biggest problem designers have with control software is the requirement to
> repetitively translate to and from the space domain.

How do you define the space-domain and the time-domain ?

In my understanding, a space-domain is defined by a "space" which contains a set of different system states. It has nothing to do with a memory or a list of instructions. A time-space is a "space" which provides a set of events. Temporal logic defines, e.g., the relationship between state changes and events, IMHO.

Physicists' understanding of the time-domain and the space-domain is completely different ....

> Logic, software, and computational prowess are concentrated almost wholly in the space-domain.

All of the different temporal logics are not based on logic? Processing of data at an individual system state creates data changes ... that means events!

> There is no temporal logic that is native to the time-domain. Data is taken from
> the space- and time-domains (or space-time domain)

... what is a space-time domain??

> of the real world via sampling and selection and filed in memory (space-domain). A
> list of instructions (space-domain) is applied to the data and it is processed by means of
> Boolean logic operators (all space-domain transformations that can be laid out in row- and column-truth-tables).

A list of instructions represents an algorithm. Some algorithms don't terminate ... which means the results can't be laid out in a finite table.

> All reference values, if any, are discrete.

> One of the major difficulties faced by software designers is although they live and work
> in three-dimensional space and multi-threaded time,

... and what is a multi-threaded time ?

> they are constrained to create systems that reside completely within the space-domains
> of computers. These systems must sense and react to real world temporal effects. The
> designers therefore are required to repeatedly translate input information from time to space,

Input information is located in your "space-domain" ... what do you have to translate?

> and translate from space to time for relevant output.

... and what is the representation of a "relevant output" in time ?

> Any temporal operations in between must be performed through space-only
> transformations. It is no wonder these unfortunate software designers often make mistakes.

And hardware designers also work in the physical space ... which means they have an additional area in which to make mistakes :)

Sorry for all of my dumb questions.

Best Regards
Armin Steinhoff

By Charles Moeller on 14 February, 2012 - 6:33 pm

Charles Moeller wrote:
>> The biggest problem designers have with control software is the requirement to
>> repetitively translate to and from the space domain.

Armin Steinhoff wrote:
> How do you define the space-domain and the time-domain ?

In a typical process, the real time domain is what the physical process being controlled runs in (actuality: the real stuff of existence). The (artificial) space-domain is what the computers manage: memory, data, control store, instruction counter, address register, data and address busses, etc. "Bits in locations and paths for them."

In a control system, parameters that indicate how the process is doing are repetitively sampled from sensors, digitized and stored in memory spaces. Those values are retrieved under program control and matched against one or more reference "standards." Actions are taken to keep the process working at some defined optimum, dependent upon the results of the comparisons.

All of this is subject to error. The bits may be awry, the addresses may be off, or the meaning of that sterile data could be interpreted wrongly. Computation, as a means of process control, is complex and adds many new components to any control scheme, increasing the risk of faults.
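The sample-compare-act loop described above can be sketched in a few lines of Python. All names and values here (`SETPOINT`, the deadband, the bang-bang strategy) are hypothetical illustrations of the pattern, not taken from any real system:

```python
# Toy sketch of the sampled control loop described above: sample a sensor
# value, compare it against a reference "standard," and act to hold the
# process at the target. Names and numbers are hypothetical.

SETPOINT = 72.0   # reference value (e.g., degrees F)
DEADBAND = 0.5    # tolerance band around the setpoint

def control_step(sample, heater_on):
    """One pass of the loop: return the new actuator state (bang-bang control)."""
    if sample < SETPOINT - DEADBAND:
        return True           # below the band: turn the heater on
    if sample > SETPOINT + DEADBAND:
        return False          # above the band: turn the heater off
    return heater_on          # inside the band: hold the current state

# A few sampled values, as they might arrive from a digitizer:
state = False
trace = []
for sample in [70.0, 71.8, 72.6, 71.9]:
    state = control_step(sample, state)
    trace.append(state)

assert trace == [True, True, False, False]
```

Every step of this loop lives in Charles's "space-domain": the sample is a stored number, the setpoint is a stored number, and the comparison is a spatial operation scheduled by the program, which is the crux of the disagreement in this thread.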

--snip--

Charles Moeller wrote:
>> Logic, software, and computational prowess are concentrated almost wholly in the space-domain.

Armin Steinhoff wrote:
> All of the different temporal logics are not based on logic? Processing of data at an individual system state
> creates data changes ... that means events!
> creates data changes ... that means events!

Great effort is expended to prevent time and temporal effects (change and events) from occurring and affecting computational systems. In any such design, one must adhere, for example, to setup and hold times. Operations are considered to be executed in a null-time zone, as the evaluations are ready at the next live moment (usually at the next clock pulse or instruction), which is designed to occur after any contributing settling or gate-delay times have run to completion.

Charles Moeller wrote:
>> There is no temporal logic that is native to the time-domain. Data is taken from
>> the space- and time-domains (or space-time domain)

Armin Steinhoff wrote:
> ... what is a space-time domain??

Some people like to refer to "space-time" after Einstein's usage. For us ordinary mortals, plain "space and time" is usually sufficient.

Charles Moeller wrote:
>> A list of instructions (space-domain) is applied to the data and it is processed by means of
>> Boolean logic operators (all space-domain transformations that can be laid out in row- and column-truth-tables).

Armin Steinhoff wrote:
> A list of instructions represents an algorithm. Some algorithms don't terminate ... which means the results
> can't be laid out in a finite table.

Charles Moeller wrote:
>> All reference values, if any, are discrete.
>> One of the major difficulties faced by software designers is although they live and work
>> in three-dimensional space and multi-threaded time,

Armin Steinhoff wrote:
> ... and what is a multi-threaded time?

I refer to the existence and influence of objects and people on intersecting world-lines.

Charles Moeller wrote:
>> they are constrained to create systems that reside completely within the space-domains
>> of computers. These systems must sense and react to real world temporal effects. The
>> designers therefore are required to repeatedly translate input information from time to space,

Armin Steinhoff wrote:
> Input information are located in your "space-domain" ... what do you have to translate?

Charles Moeller wrote:
>> and translate from space to time for relevant output.

Armin Steinhoff wrote:
> ... and what is the representation of a "relevant output" in time ?

It is when the computer data is represented by a physical presence (an output) in the real world, joining the real time domain.

Charles Moeller wrote:
>> Any temporal operations in between must be performed through space-only
>> transformations. It is no wonder these unfortunate software designers often make mistakes.

Armin Steinhoff wrote:
> And hardware designer are working also in the physical space ... that means
> they have an additional area to make mistakes :)

Yes.

Best regards,
CharlieM

Armin Steinhoff wrote:
>> ... and what is a multi-threaded time?

Charles Moeller wrote:
> I refer to the existence and influence of objects and people on intersecting world-lines.

I would never trust control systems based on such a "multi-threaded time"....

Best Regards
Armin Steinhoff

By Vladimir E. Zyubin on 19 February, 2012 - 9:37 am

Armin Steinhoff wrote:
>>> ... and what is a multi-threaded time?

Charles Moeller wrote:
>> I refer to the existence and influence of objects and people on intersecting world-lines.

Armin Steinhoff wrote:
>I would never trust control systems based on such a "multi-threaded time "....

I personally still think there may be something interesting behind "multi-threaded time"; we just cannot discuss it because of "cognitive dissonance" aggravated by the wish to patent the idea. A counterproductive way, IMO, but the decision is up to the author.

--
best regards, Vladimir

Vladimir:

> I personally still think it can be something interesting under "multi-threaded time", we just can not
> discuss it because of "cognitive dissonance" aggravated by the wishes to patent the idea. Counterproductive way,
> IMO, but decision is up to the author.

As Ayn Rand wrote, "We exist for the sake of earning rewards."

If I don't find interested parties in academia or enterprise, I will eventually make my method public.

Best regards,
CharlieM

By Vladimir E. Zyubin on 20 February, 2012 - 4:27 am

CharlieM wrote:
> As Ayn Rand wrote, "We exist for the sake of earning rewards."

It is up to them to choose their purpose in life.

> If I don't find interested parties in academia or enterprise, I will
> eventually make my method public.

Well, I personally do realise there are a lot of problems with the current linguistic means in automation. And, as I understand it, Armin admits the current situation can be improved as well. So there is no need to popularize the idea of change. The question is: what are the changes? If you cannot talk about them for "the sake of earning [material] rewards", then... life is very short, and there are a lot of other interesting things to do besides spending time on the decryption of fuzzy allusions that are (I must confess) not understandable to me. As for me, I think it is easier to wait for a patent or a publication about the "chrono-synclastic" solution.

with kindest regards, Vladimir

Armin:

Armin Steinhoff wrote:
>>> ... and what is a multi-threaded time?

Charles Moeller wrote:
>> I refer to the existence and influence of objects and people on intersecting world-lines.

Armin Steinhoff wrote:
> I would never trust control systems based on such a "multi-threaded time"

A pity, as that is what wider words and faster clocks attempt to do, without ultimate success.

Wouldn't it be easier to work in the same environment in which you live?

Best regards,
CharlieM

By Charles Moeller on 9 February, 2012 - 10:39 pm

CWW:

> That argument ignores the reality of economics.

> A different piece of hardware for each application is not feasible in
> comparison to a piece of hardware that can do almost anything and software to

---- snip ----

Low-cost configurable hardware in the form of FPGAs would be more appropriate for the low-level applications.

Best regards,
CharlieM

By Curt Wuollet on 10 February, 2012 - 6:32 pm

That's possible, they have been replacing scatter logic for years, but usually only in volume applications.

I'm not sure what magic would change that.

Regards
cww

> Low-cost configurable hardware in the form of FPGAs would be more
> appropriate for the low-level applications.

By Charles Moeller on 11 February, 2012 - 1:48 pm

CWW:

I take it the "scatter logic" to which you refer is what we used to call "random logic."

Curt Wuollet wrote:
> That's possible, they have been replacing scatter logic for years, but usually only in volume applications.

>I'm not sure what magic would change that.

Looking for the magic.

CharlieM wrote:
>> Low-cost configurable hardware in the form of FPGAs would be more
>> appropriate for the low-level applications.

Best regards,
CharlieM

Charles Moeller wrote:

> Low-cost configurable hardware in the form of FPGAs would be more appropriate for the low-level applications.

In general, FPGAs are also used in high-level applications:
http://www.jucs.org/jucs_9_2/optimized_temporal_logic_compilation

Regards

Armin Steinhoff

PS: what you are trying to do seems to have been done already ....

Dick Caro wrote:
>> Charles, for many years process control was supplied by pure hardware solutions
> ---- snip ----
>
>> The optimal control system can no longer be implemented without a very
>> significant amount of software - although some suppliers insist on
>> calling it "firmware." There is no "After Software," in my opinion. More likely - better software.

Charles Moeller wrote:
> Not all control problems require a computer, but that's what is being used in most
> cases, simply because there is no easier or well-documented alternative. Sometimes a
> microprocessor is overkill or not appropriate. But in the cases when you absolutely
> need computing, then you necessarily must have software. I agree that computation has
> allowed us to get to our present technical position, but it is only one means of solving
> control problems. There are other means, one of which I have been addressing.

> Software provides direction to the hardware,

Software can do nothing because it is just a special configuration of a piece of passive hardware ... we call it memory.
I have never seen a passive memory provide direction to hardware in general. And that is OK :)

> but hardware actually performs all the functions. Software can only tell the hardware
> what it is time to do.

Again, software does nothing by itself ... it must be executed.

> Now if there was a way to build into the hardware a sense of time so that it would
> automatically do what it is time to do,

We have had it for more than 50 years ... we call it timer hardware, which can create timer events. These timer events are processed by the execution of OS tasks or of an application program.
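A rough Python sketch of that pattern, with `threading.Timer` standing in for the timer peripheral and the callback standing in for the interrupt service routine or OS task (the names are illustrative only):

```python
# Toy model of Armin's point: a timer raises an event; software merely reacts
# to it. threading.Timer stands in for the hardware timer peripheral.
import threading

fired = threading.Event()

def on_timer():
    # Plays the role of the ISR / OS task that runs when the timer event occurs.
    fired.set()

timer = threading.Timer(0.05, on_timer)  # "program" the timer for 50 ms
timer.start()

# The application blocks until the event arrives, instead of polling a clock.
fired.wait(timeout=1.0)
assert fired.is_set()
```

The division of labor matches what the post describes: the timer decides *when*, and the software only decides *what* happens then.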

> we would need less software, or in some cases, none.

In all cases you need well-configured and working hardware that fits the application.
How would you handle hardware errors if everything were processed in hardware?

> That would be safer, ease the class of faults attributable to software, and cost less to
> build and maintain.

> We have had decades of better software and we still get reports like The Standish
> Chaos Report [a] that shows a (currently) backward trend in software projects' success rates:
> 32% Successful (On Time, On Budget, Fully Functional)
> 44% Challenged (Late, Over Budget, and/or Less than Promised Functionality)
> 24% Failed (Canceled or never used)
> a. http://www.galorath.com/wp/2009-standish-chaos-report-software-going-downhill.php

And what about statistics on hardware projects?

I have seen a lot of hardware products with a lot of bugs. Those hardware bugs could only be fixed in software, because bug fixes at the hardware level were too expensive and took too much time.

> Other reports and articles indicate software production efficiency is only about 50% for
> the successful 32% of projects. That is because as much time is spent on correcting
> software as is spent on its creation.

OK and how much time and money is needed to fix hardware bugs?

Best Regards
Armin Steinhoff

There is not much new in computing except for the increased availability of multicore processors in hardware. Unfortunately, there does not seem to be much in the way of software to really support true parallel processing. In the IEC 61131-3 standard, one of the "languages" listed is SFC, or sequential function charting, which is taken exactly from Grafcet. SFC specifies the sequential nature of events and parallel operations characteristic of most real manufacturing processes. SFC is the language of batch control since it has both serial and parallel processes. PLCs that implement SFC actually simulate the parallel operations on conventional Turing machine microprocessors.

Conventional microprocessors are now available with up to 8 cores, while advanced processors from IBM and others have up to 32 cores. Today, the definition of a Supercomputer is to have many cores, possibly up to 1024 or even more. There are applications for some of these huge parallel processing supercomputers in seismic analysis and atmospheric weather forecasting, but there are no languages developed for their programming. The parallelism for these problems is typically one main program thread used for all cores, while each core operates on a different segment of a database.

I would like to see SFC used as the base programming method for multicore parallel processing where each core operates on an independent thread until they join. I don't know if anyone is doing this, but it seems to be a trend for the future of computing. I see that future more as a strongly hardware-assisted platform for software.
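The split-and-join behavior Dick describes can be sketched with ordinary threads. This is a toy model of an SFC simultaneous branch, not a real SFC runtime, and the step names are hypothetical:

```python
# Toy sketch of SFC-style "simultaneous divergence": parallel branches run as
# independent threads, then a join synchronizes them before the sequence
# continues. Step names are hypothetical batch-control steps.
import threading

log = []
lock = threading.Lock()

def step(name):
    with lock:
        log.append(name)

def branch_a():
    step("fill_tank")   # one parallel branch

def branch_b():
    step("preheat")     # the other parallel branch

# Parallel split: both branches start together...
threads = [threading.Thread(target=branch_a), threading.Thread(target=branch_b)]
for t in threads:
    t.start()
# ...and the join (simultaneous convergence) waits for both before continuing.
for t in threads:
    t.join()
step("mix")  # the next sequential step runs only after both branches finish

# The two branch steps may land in either order, but "mix" is always last.
assert log[-1] == "mix"
assert set(log[:2]) == {"fill_tank", "preheat"}
```

On a multicore machine each branch could map to its own core, which is exactly the SFC-per-core arrangement the post proposes; on a conventional PLC the same chart would be time-shared on one processor.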

Dick Caro

By Vladimir E. Zyubin on 2 February, 2012 - 10:22 am

Dick Caro> Unfortunately, there does not seem to be much in the way of software to really support true parallel processing.

There is a lot of it... it is just hard to program. Humans do not need parallelism; we need independence (to simplify). BTW, the multicore architecture is a result of technological limits on element size. As to SFC with its Petri-net roots -- "Some sources [1] state that Petri nets were invented in August 1939 by Carl Adam Petri - at the age of 13 - for the purpose of describing chemical processes."

[1] Carl Adam Petri and Wolfgang Reisig (2008) Petri net. Scholarpedia, 3(4):6477

http://en.wikipedia.org/wiki/Petri_net#cite_ref-0

Zyubin > As to SFC with the Petri-net roots...

Yes indeed, SFC was created at Telemechanique in France. The developers cited it as an implementation of Petri-net. SFC has been adopted by ISA88 as the "preferred language" for programming batch phase logic, the primary element of a batch control program.

Dick Caro

By Vladimir E. Zyubin on 4 February, 2012 - 4:06 am

Dick Caro >Zyubin > As to SFC with the Petri-net roots...

> Yes indeed, SFC was created at Telemechanique in France. The developers
> cited it as an implementation of Petri-net.

I agree that at this time SFC is the most powerful IEC 61131-3 language. But SFC has the same problem as Petri nets... a problem with control-flow convergence (poorly controlled markings), as well as poor synchronism and structuring. So I am personally not sure which will be the better solution -- to try to solve the current SFC problems, or just to enhance the ST syntax/semantics to add the necessary abilities. The latter is not difficult. The former may be impossible.

Best regards, Vladimir

By Charles Moeller on 4 February, 2012 - 9:33 pm

Human psychology and physiology are able to handle situational dynamics in a parallel-concurrent manner. The winning tennis professional's returns are never quite the same and sometimes occur at blinding speed, coordinating position on the court, racquet placement, angle of attack (for spin), and force. The highly practiced symphony violinist coordinates bowing and finger placement in exact phase with the orchestra, no matter what the tempo. These kinds of activities take place in concurrent fashion.

Great authors draw their readers into the scenes, painting them in parallel, although the words, sentences, paragraphs, and chapters are set down in serial fashion. Our thought processes and languages allow and support such parallel-concurrent processes described serially.

Computer programs at first glance appear to be lists of line-by-line activities, the arrays of which might (possibly) be read from left to right, a page at a time. But no: the lines of code are read and executed one at a time, starting at the top of page one and proceeding downward, in generally the same order in which they were written.

Going back to a well-written book, a human can read about flowers and mentally build a garden based on those words as the description proceeds.

If something changes in the description, the mental image is updated immediately and automatically. That is precisely how a parallel-concurrent control scheme can be developed. Each sub-process contributing to the overall process runs by itself and all run concurrently and contribute to the overall process.

Such controller activities performed on a time-shared basis are what we have now with TMs. What is needed is the parallel-concurrent alternative.

Best regards,
CharlieM

By Vladimir E. Zyubin on 5 February, 2012 - 1:48 pm

CharlieM > The human psychology and physiology is able to handle situational dynamics in a parallel-concurrent manner.
...

Can you support your words with a reference? I am sorry, but my investigation of the topic (Miller's law, short-term and long-term memory, software psychology, etc.) leads me to a different conclusion. BTW, I have set this down in an article about information complexity.

> Such controller activities performed on a time-shared basis is what we have now
> with TMs. What is needed is the parallel-concurrent alternative.

In my opinion, "parallel-concurrent" means just independent... if you describe a thread independently of the other threads, you can produce simpler, more reliable, readable, maintainable code. In control software, the parallelism of the physical processes allows programmers to simplify the description. This is absolutely different from other fields of computing (e.g. supercomputer programming), where programmers rack their brains to get code that will utilize all the power of the platform. And that code can be very complex, unportable, etc.

By Charles Moeller on 6 February, 2012 - 11:13 am

Vladimir:
> CharlieM wrote: The human psychology and physiology is able to handle situational
>> dynamics in a parallel-concurrent manner.
...
Vladimir wrote:
> Can you prove your words by a link?

You can experience proof for yourself by observing and thinking.

> I am sorry, my investigation on the topic (Miller's law, short-term and long-term
> memory, software psychology, etc.) leads me to a different conclusion. BTW, I
> have fixed it in an article about information complexity.
>
>> CharlieM wrote: Such controller activities performed on a time-shared basis is what we have now
>> with TMs. What is needed is the parallel-concurrent alternative.

Vladimir wrote:
> In my opinion, "parallel-concurrent" means just independent... if you
> describe a thread independently of the other threads, you
> can produce simpler, more reliable, readable, maintainable code. In control
> software, the parallelism of the physical processes allows programmers to simplify
> the description. This is absolutely different from other fields of computing
> (e.g. supercomputer programming), where programmers rack their brains to get
> code that will utilize all the power of the platform. And that code can be very complex, unportable, etc.

Simple control science is the specific area of my focus. Parallel-concurrent indeed means "mostly" independent, as some parts of some activities may depend on others. The operation of threads via computer, in serial, parallel, or time-shared fashion, is however different from performing any or all of the activities "while" they are all active in configuration-bound hardware. For example, a certain 16-bit adder may be clocked at up to 40 MHz, but it uses less power, and results are available more rapidly, if it is operated in a flow-through manner, performing operations as the operands are presented. Efficiency and speed are gained by local control in hardware over centralized control via software.

Best regards,
CharlieM

Mr. Moeller: Are you advocating some new hardware architecture? It would help this discussion if you would reveal your thoughts.

Many years ago, my colleagues at The Foxboro Company were making many of the same points I see in your discussions. We were exploring the use of Concurrent Pascal (http://en.wikipedia.org/wiki/Concurrent_Pascal) for programming real-time process control systems. I have found that this language was useful for operating systems and real-time systems software, but that SFC (Sequential Function Charts) were more useful in describing the serial/parallel flow of real-time process control applications. As you know, the multitasking capability of real-time operating systems used in PLCs and process control systems easily handles the execution of programs written within the SFC/Grafcet structure.

Dick Caro

By Charles Moeller on 7 February, 2012 - 3:45 pm

> Mr. Moeller: Are you advocating some new hardware architecture? It would help
> this discussion if you would reveal your thoughts.

Mr. Caro: I have been revealing my thoughts over the past week as this discussion has proceeded. The present computer architecture is well suited for linear-sequential instruction-bound control. It supports independent threads only on a time-shared basis (unless you have a processor per thread).

What I have been writing about is a way to configure hardware such that it naturally and automatically performs in a parallel-concurrent manner as needed, instead of periodically under software control. Such arrangements would be classed as non-computational means of control. The architecture would not be fixed, but would be modifiable so as to especially suit each type of application. This new path is taken because it seems the Turing-type paradigm has just about run its course.

I do not claim that what we have in computation is not useful, because it is, but there are currently diminishing returns for the increasing trouble of smaller, denser, faster hardware and voluminous software. In order to move on, we (at least some of us) should change direction. This is especially true for the smaller and less demanding control tasks, in which only a few simple functions are required and there is a small number of I/Os. Also in this category are simple control tasks that have safety-, time-, and mission-critical requirements and are not well-served by the expensive GHz behemoth chips.

Best regards,
CharlieM

Dick Caro wrote:
>> Mr. Moeller: Are you advocating some new hardware architecture? It would help
>> this discussion if you would reveal your thoughts.

Charles Moeller wrote:
> Mr. Caro: I have been revealing my thoughts over the past week as this discussion has
> proceeded. The present computer architecture is well suited for linear-sequential
> instruction-bound control. It supports independent threads only on a time-shared basis
> (unless you have a processor per thread).

A multicore CPU with 8 cores supports 8 independent threads. If multi-threading is supported on each core, we have at least 8 additional independent threads running.

GPUs support hundreds of cores ...
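As a rough illustration of this point (the thread count and workload below are invented for the sketch, not taken from anyone's system), a handful of independent threads can be launched from a high-level language and left for the OS to schedule across separate cores:

```python
import threading

def control_task(task_id, iterations, results):
    """Simulate one independent control loop: sample, compute, act."""
    value = 0
    for _ in range(iterations):
        value += 1          # stand-in for one sample/compute/actuate cycle
    results[task_id] = value

# One thread per core of a hypothetical 8-core CPU; each runs with no
# reference to the others, so the OS may schedule them truly in parallel.
results = {}
threads = [
    threading.Thread(target=control_task, args=(i, 100_000, results))
    for i in range(8)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results.items()))
```

Each task here shares nothing but the hardware, which is the "independent threads" case under discussion; coordination between tasks is exactly where the extra software complexity comes back in.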

Charles Moeller wrote:
> What I have been writing about is a way to configure hardware such that it naturally
> and automatically performs in a parallel-concurrent manner as needed, instead of
> periodically under software control.

When you use an SBC with an Atom processor that includes a big FPGA on the same chip ... you can use both linear-sequential and truly configurable parallel execution.

Charles Moeller wrote:
> Such arrangements would be classed as non-computational means of control. The
> architecture would not be fixed, but would be modifiable so as to especially suit each
> type of application. This new path is taken because it seems the Turing-type paradigm has
> just about run its course.

> I do not claim that what we have in computation is not useful, because it is, but there
> are currently diminishing returns for the increasing trouble of smaller, denser, faster
> hardware and voluminous software. In order to move on, we (at least some of us)
> should change direction. This is especially true for the smaller and less demanding
> control tasks, in which only a few simple functions are required and there is a small
> number of I/Os. Also in this category are simple control tasks that have safety-, time-,
> and mission-critical requirements and are not well-served by the expensive GHz
> behemoth chips.

It's already done by processing boards with multiple FPGAs ... or by a network of FPGA-based processing nodes.

Best Regards
Armin Steinhoff

By Charles Moeller on 9 February, 2012 - 10:25 pm

Moderator's note: This message originally quoted the entire message in post http://www.control.com/thread/1327707041#1328688315. Rather than reproduce it here, please refer to it.

---- snip ----

Charles Moeller wrote:
>> I do not claim that what we have in computation is not useful, because it is, but there
>> are currently diminishing returns for the increasing trouble of smaller, denser, faster
>> hardware and voluminous software. In order to move on, we (at least some of us)
>> should change direction. This is especially true for the smaller and less demanding
>> control tasks, in which only a few simple functions are required and there is a small
>> number of I/Os. Also in this category are simple control tasks that have safety-, time-,
>> and mission-critical requirements and are not well-served by the expensive GHz
>> behemoth chips.

Armin Steinhoff wrote:
> It's already done by processing boards with multiple FPGAs ... or by a network of FPGA-based processing nodes.

Armin:

At great cost. Suppose there were a better hardware language that incorporated time, such that the hardware would know when to perform its functions without software. Suppose this temporal-logic hardware could be implemented in a small portion of a small FPGA. The combination could be applied to low-level applications like toasters, fuel-injection systems, braking systems, and simple assembly and fabricating machine automation for pennies. In the case of plant automation, the plant electrician or engineering technician could configure the chips. Third-world countries could automate factories without being dependent upon computer software and hardware experts.

Best regards,
CharlieM

[clip]
> Armin Steinhoff wrote:
>> It's already done by processing boards with multiple FPGAs ... or by a network of FPGA-based processing nodes.

Charles Moeller wrote:
> At great cost.

That's not the case. It is possible to buy an FPGA board with, e.g., a Spartan-6 FPGA for less than $100.

> Suppose there was a better hardware language

Sorry, what is a hardware language? I know only hardware description languages, like VHDL.

> that incorporated time such that the hardware would know when to
> perform its functions without software.

The operation of an FPGA is based on software ... it's called firmware and is mostly located in serial EPROMs.
But the firmware is not executed by an MCU. However ... nothing works without software :)

> Suppose this temporal logic hardware could be implemented in a
> small portion of a small FPGA. The combination could be applied to
> low-level applications like toasters, fuel injection systems,
> braking systems, and simple assembly and fabricating machine automation for pennies.

Pennies? Hardware and its replication cost much more.

> In the case of plant automation, the plant electrician or
> engineering technician could configure the chips. Third-world
> countries could automate factories without being dependent upon computer software and hardware experts.

That's exactly what this company is already doing ... http://www.mnbtech.com

Best Regards
Armin Steinhoff

Mr. Moeller:

You said
> What I have been writing about is a way to configure hardware
> such that it naturally and automatically performs in a parallel-concurrent
> manner as needed, instead of periodically under software control. Such
> arrangements would be classed as non-computational means of control. The
> architecture would not be fixed, but would be modifiable so as to especially
> suit each type of application. This new path is taken because it seems the
> Turing-type paradigm has just about run its course.

Have you applied for a patent on this new hardware class? I would enjoy hearing more about it once you have protected your invention.

Dick Caro

By Charles Moeller on 9 February, 2012 - 10:32 pm

Mr. Caro:

> Have you applied for a patent on this new hardware class? I would enjoy
> hearing more about it once you have protected your invention.

Working on it. Looking for support.

Best regards,
CharlieM

By William Sturm on 6 February, 2012 - 3:58 pm

CharlieM wrote: <<Efficiency and speed are gained by local control in hardware
over centralized control via software>>

While that may be true, I believe the software to make all of those parts work together as an integrated system could get very complicated. I would like to think that there is a simple way, but I cannot imagine it at this time.

An example of concurrent hardware is a microcontroller with integrated peripherals. The TI MSP430 can go into a low-power sleep mode while its A/D converter, counters, etc. continue to do their jobs and awaken the CPU when needed. It is a very neat chip, actually.

Bill Sturm

By Charles Moeller on 7 February, 2012 - 3:56 pm

Bill:

CharlieM wrote:

>> Efficiency and speed are gained by local control in hardware
>> over centralized control via software

Bill Sturm wrote:
> While that may be true, I believe the software to make all of those parts work
> together as an integrated system could get very complicated. I would like to
> think that there is a simple way, but I cannot imagine it at this time.

The arrangement of which I wrote (specifically, the 16-bit adder) used no software and no clock. It could have been clocked at up to 40 MHz, but it had a flow-through option which didn't require a clock. When using this clockless option, the two 16-bit operands were simply presented at the input ports, and after about 20 ns the sum was stable and available at the output port.
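For readers who want to see the flow-through idea concretely, here is a toy gate-level model in Python. It sketches a textbook ripple-carry adder, not the actual hardware described above: the sum settles combinationally from the operands, with no clock and no stored program driving the cells.

```python
def full_adder(a, b, cin):
    """One combinational full-adder cell built from AND/XOR/OR gates."""
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def ripple_adder16(x, y):
    """Present two 16-bit operands; the result 'flows through' 16 cells.

    In hardware every cell exists simultaneously and the outputs settle
    after gate-propagation delay; the Python loop only models the carry
    chain, it does not imply sequential execution in the circuit.
    """
    carry, total = 0, 0
    for bit in range(16):
        s, carry = full_adder((x >> bit) & 1, (y >> bit) & 1, carry)
        total |= s << bit
    return total, carry  # 16-bit sum and final carry-out

print(ripple_adder16(40000, 30000))  # wraps past 65535: (4464, 1)
```

The point of the model is that nothing "fetches an instruction": the operands themselves drive the outputs, which is what distinguishes the clockless option from a clocked, program-driven add.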

> An example of concurrent hardware is a microcontroller with integrated
> peripherals. The TI MSP430 can go into a low power sleep mode and it's A/D,
> Counters... can continue to do their job and awaken the CPU when needed. It is a
> very neat chip, actually.

Yes, a great chip! Part of it can operate independently of the TM! Now that is the part I like most!

Best regards,
Charlie

First, there are, and have been for decades, microcontrollers that can present summed values in very short time intervals. So while an unclocked 16-bit adder might seem all new and clever, it isn't, and it isn't going to solve any of the problems that are being discussed.

One item I've not seen beaten to death, like the dead horse that it should be regarded as, is the issue of the qualifications of "programmers". And this, in my 33 years of experience as one, plus about 20 years making programmers do my evil bidding, is where the problem lies. Or, rather, the problem of MANAGERS deciding that Bill, who just got his degree last month in WhizBang Programming Language, is as good a programmer as Sue, who learned some other language five or 10 years earlier. Mostly because Bill is cheaper than Sue -- that's the real reason many managers like to pretend that the Bills of the world are as good as (or better than) the Sues of the world.

The core of the "software problem", in this problem domain, is that "programmers" are neither scientists nor engineers. Many are little more than glorified typists, and many of the glorified ones have atrocious problem solving skills, which is likely why they are neither scientists nor engineers.

When I was studying Mechanical Engineering (my minor at uni) and working for Marine Engineers (how I paid to be at uni in the first place ...) the amount of "testing" that was performed, either for real or with models, far exceeded what I was being taught over in the Computer Science department. If I was designing your basic Warren Truss, or calculating some parameters of a fuel or water tank on a ship, by the end of the exercise I had a very well-defined "thing". I knew where the forces in my truss came from and went to, or I knew how my tank was going to affect the ship as it did whatever it did with however much of whatever liquid was in it. My models were tested against reality -- my first ship design project accurately predicted that a 200 ton piece of steel was going to float about 6' deep in the water, with the stern a few inches lower than the bow. They put it in the water 2 years later, and it didn't roll over or stand on one end or the other and sink. THAT is engineering. BTW, I was an undergrad during that particular feat of engineering.

Not at all so for "computer science" / "software engineering".

Efforts at getting programmers to think in terms of "fully describing the problem" are futile because "right" and "good enough" are too far apart in terms of cost -- it's the 90/10 rule. Ninety percent of the code is written in 10 percent of the time. Which means, it takes about an order of magnitude longer to finish that last 10 percent. Which is the 10 percent that makes sure the other 90 percent is working properly.

Efforts at solving the problem revolve around kicking the can down the road, instead of kicking the programmers out the door. Which gets back to the "marketing" thing -- if the cost, net of corporate jets and free soda and pizza for lunch, of "Bill O/S 1.0" is $50, you can forget trying to convince anyone that in four or five years, when the last of the bugs have been worked out of "Bill O/S 1.5 Update 7", they should pay $200 for it. Especially since they can now buy "Bill O/S 3.0" for $50, complete with all the newest bugs that won't be fixed for another four or five years.

Don't believe me? Check out what Microsoft has been doing with Windows XP, Vista and now 7 (and 8 is in the wings). Which is more robust? Which can you readily buy? What would Windows XP actually cost if Microsoft had to keep fixing all the bugs, without adding all the newest features that would drive revenue from the "gotta have the sexy features!" crowd?

And I'm going to wrap this up right about now.

My earliest professional programming gigs, before I decided to be a professional programmer for real, were all Marine Engineering related. Most ships are designed according to a set of rules from the American Bureau of Shipping. They had giant books filled with rules for every aspect of a ship. And when something broke, they'd figure out why, and come up with a new rule, and hopefully things didn't break (and people didn't die) the next time a ship got built. Ship designs were not based on "sexy". The process was not "marketing department driven". The process was based on successive refinement with feedback from real-world experiences. No one was designing "transparent overlay with animation" water-tight bulkheads. But mostly they weren't hiring Music majors (I had one once on my staff ...) to design ships because they could hold a pencil or move a mouse.

The software problem is NOT software. It exists because "good enough" has become the standard against which completion is measured, and "sexy" is the standard against which "better" is measured, and "cheaper" is the standard against which "value" is measured. Verified designs and "provably correct" don't even enter into the picture.

Julie in Austin:

> The software problem is NOT software. It exists because "good enough" has
> become the standard against which completion is measured, and "sexy" is
> the standard against which "better" is measured, and "cheaper" is the standard
> against which "value" is measured. Verified designs and "provably correct"
> don't even enter into the picture.

I appreciate your viewpoint.

The complicating factor of translation from the real to the artificial spaces of computer memory and back to the real after processing is the hidden problem I am addressing.

Since all software is in the space-domain:

- pick X from there
- place it here
- transform it thus
- put the result there
- pick Y from other place
- decrement it by one
- place it in new space
- ...

while our control problems exist in real space and time, a translation from the real to the artificial, via placement in memory, is required. In that constructed universe mediated by numbers and arithmetic, the properties of time are generally taken to be the same as, and in fact are mapped onto, a fourth spatial dimension having the general character of extension, or length. We use counting mechanisms to translate time-ticks into the space domain, as can be seen on the faces of our clocks or in the addresses of our control store. This practice fits nicely into arithmetic computers, but it adulterates and obscures the true character of the time domain.
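The "time-ticks into the space domain" point can be sketched in a few lines. In the sketch below (tick period and setpoint are invented for illustration), a digital "timer" is nothing but a stored count compared against a number -- time reduced to a quantity in memory:

```python
TICK_MS = 10          # preselected measurement quantum (the tick period)
SETPOINT_MS = 50      # "wait 50 ms" becomes "count to 5"

ticks = 0             # time, reduced to a number in the space domain
expired = False
for _ in range(8):    # each pass models one hardware tick interrupt
    ticks += 1        # a time-tick translated into a stored count
    if ticks * TICK_MS >= SETPOINT_MS:
        expired = True
        break

print(ticks, expired)  # 5 ticks of 10 ms reach the 50 ms setpoint
```

Everything the program "knows" about elapsed time is this counter; any temporal structure finer than the tick quantum is simply invisible to it.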

The combination of artificial, predetermined dimensions of space and time, and the limitations of arithmetic operations, force one to record and determine the conduct of processes using successive frames or snapshots according to preselected measurement quanta. The digitization of the space-time functions of a process, however, forever sunders the co-mingled space-time continuum into separate spatial and temporal parameters, which are ultimately relegated to signed and numbered tokens in the space domain. Once those parametric relations are separated, extraordinary measures must be taken, via complex algorithms, to extract meaning from them. Critical inquiry concerning an event requires comparison between stored frames after the fact, and the development or discovery of suitable relationships that could have produced a given frame from its predecessors. Assumptions based upon experiential knowledge are often applied to these phenomena to unravel the quandaries.

In digital process monitoring and control, as it is presently practiced, continuous natural time cannot be accommodated. Time, consequently, is reckoned as successive snapshots of a process in space, with the temporal intervals between frames preselected to be small enough, hopefully, to monitor and control the process adequately. If one visits and records every sampling point and frame, one need not think about the process, except in retrospect. In current digital controllers, data is collected, then decisions are made. The response always occurs well after the event and not concurrently with it. While the computer is busy processing some previously acquired data, it is virtually blind to other occurrences in real time. Reliance upon this mode of operation is a program for disaster, given the possibility of a missed event or failed component. Chaos lurks.
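The sampling blind spot described above is easy to demonstrate. In the sketch below (all timings invented for the example), a controller polling its input every 10 ms never observes a 4 ms pulse that falls between two sample instants:

```python
SAMPLE_PERIOD_MS = 10
PULSE_START_MS, PULSE_END_MS = 13, 17   # a 4 ms event between samples

def input_level(t_ms):
    """The real-world signal: high only during the short pulse."""
    return PULSE_START_MS <= t_ms < PULSE_END_MS

# The polled controller samples at t = 0, 10, 20, ... ms.
samples = [input_level(t) for t in range(0, 100, SAMPLE_PERIOD_MS)]
print(any(samples))   # False: the event happened, but was never observed
```

The event genuinely occurred in "real time", yet no frame recorded it -- which is exactly the missed-event hazard the paragraph above warns about.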

Best regards,
CharlieM

As I see it, the "software as in the bits and bytes needed to make a computer work" is confused with the "software as in how do I make this system do what I want?".

The difficulty in developing solutions to control problems is in first defining exactly what the problem to be solved is. The definition may well involve physical, mechanical or chemical effects as well as time and numerical parameters. Defining the problem and specifying what is an acceptable response has to be done by people who know what they want the system to do - the end-users (operators and plant managers). This problem definition has then to be conveyed to those who develop the control solution. Once the solution has been developed, exact details of its capabilities and limitations need to be passed back to the end-users for validation and verification.
A control solution has to be "complete, concise, and clear" - the latter word applying to all parties involved in its specification, design, operation and maintenance. So one absolute essential of a control system specification is that it must be understandable by all involved - not just an elite few eggheads who get obsessed by fine detail to the exclusion of the overall performance.

There is enough difficulty in getting people to unambiguously interpret specifications using the existing limited capabilities of Boolean logic. Throw in a few strange symbols, such as are found in some of the logic references quoted, and this becomes much harder.

Note that nowhere above have I referred to "hardware" or "software" solutions - this problem exists whatever the solution format adopted. One of my first tasks as a graduate engineer was to translate the relay wiring diagrams on 40-odd sheets of A2 into a format that could be understood by the operators and maintenance staff - the final format was about 3 sheets of Function Block Diagrams similar to the IEC61131 format. I have found on a number of occasions that a combination of the FBD and SFC formats meets all needs and is quite easily interpreted by most people with a minimum of training in how to read them.

Once the problem definition has been sorted out, it can be passed over to the software jockeys to crunch out the code - or over to the hardware whizzes to put into a hardware format. If the first part is done as it should be, the code or hardware configuration should fall out of the functional specification. It is when the coders or detail designers begin to impose their own ideas on to the solution (often without a full understanding of the issues involved) that things start to turn to custard.

Bruce Durdle:

> As I see it, the "software as in the bits and bytes needed to make a computer
> work" is confused with the "software as in how do I make this system do what I want?".

There shouldn't be any confusion with:
1. the OS, being that software needed to make the shared hardware act like a Turing-type machine (TM), good for acquiring and shuffling data around, and

2. the application, which makes the TM act like a process monitor-controller.

The difficulties with software, I found, are due to the exclusive use of the Turing paradigm. All of software, its rules, its complexities and faults derive from the restriction to Turing's approach and method (computation). Software-mediated response is always after-the-fact, as it addresses the control situation after the best moment for action has passed. Modern digital control activities are never direct, but depend upon the integration and coordination of at least four separate systems:

* the physical process to be monitored and controlled

* the electronic hardware: microprocessors, sensors and effectors

* the operating system (OS) that enables the available electronic hardware functions to be accessed and exercised on a shared basis

* the application software, a series of instructions that tells the hardware what it is time to do

In some ways, the goals and activities of these support systems are in competition for resources they must share. All efforts to date in the field of computational control systems have addressed the various problems and difficulties that exist and which are created, in part, by this limited choice of the Turing method. Turing-type machines and their necessary software are constrained to work in the space-domain, while the physical processes we wish to control inhabit the domains of space and time. It has proved to be cumbersome, inefficient, and unsafe to effect process control in natural space-time with tools that only work in and upon space, such as microprocessors and software. The required translation of temporal concepts, relationships, and actions to the space-domain (before they can be operated upon by the space-only logic operators) and back again to the time-domain for useful output, only adds to system complexity and difficulty.

The Turing treatment produces a number or condition (or series of numbers or conditions) as its salient output by performing static transformations and translations. The manner in which the numbers or states so produced relate to the physical process being controlled must be determined and referenced by the programmer, who uses lookup tables and numerical and conditional benchmarks for comparison at selected points of the process.

My solution, PTQ, is an alternative mechanical reasoning system for process control that is simpler and more direct than the Turing paradigm. The PTQ method generates a process in its logic elements that takes its cues from, and mirrors, the real physical process being monitored and controlled. The physical (real world) and electronic (ideal) processes are easily compared in a continuous manner for correspondence. Differences can cause process suspension, correction on-the-fly, or an alarm to be raised. The new method works natively and directly in each of the domains of space, time, and (joint) space-time, without translation.

The primitive static operators AND (conjunction), NOT (negation), and STORE (memorize), in combinations and sequences, are necessary and sufficient to generate the whole of computer science. PTQ has grown beyond computation by incorporating, in corresponding hardware logic, seven more primitive operators. These additional operators and their functions describe activities and reactions in the time domain. They are dynamic operators that, in combination with the conventional static AND and NOT, make it easier to "tell the process stories" (specify processes). Physical processes can therefore be described in more appropriate and natural language, which enables one to monitor and control them automatically without run-time software.
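The completeness claim for AND, NOT, and STORE can be checked quickly. The sketch below demonstrates ordinary Boolean completeness only -- PTQ's seven dynamic operators are not described in this thread, so nothing here models them: OR and XOR fall out of AND/NOT via De Morgan's laws, and STORE supplies the one bit of state.

```python
def AND(a, b): return a and b
def NOT(a):    return not a

# Everything else in combinational logic derives from AND and NOT:
def OR(a, b):  return NOT(AND(NOT(a), NOT(b)))        # De Morgan
def XOR(a, b): return OR(AND(a, NOT(b)), AND(NOT(a), b))

class Store:
    """STORE: one bit of memorized state, the third primitive."""
    def __init__(self): self.q = False
    def set(self):   self.q = True
    def reset(self): self.q = False

bit = Store()
bit.set()
print(OR(False, True), XOR(True, True), bit.q)  # True False True
```

With these three primitives any truth table and any finite state machine can be built up, which is the sense in which they "generate the whole of computer science".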

Real-time and naturally parallel-concurrent, the electronic hardware corresponding to the additional dynamic operators acts with the process being controlled as it happens, rather than after the fact as software-mediated controllers do.

In conventional control practice there are four systems that interact, often in competitive ways, as mentioned above. The PTQ method has just two systems working hand-in-glove:

* the physical process and

* the PTQ real-time process monitor-controller

The description of the correct process controller is simply an accurate specification of the physical process being controlled. Using PTQ terminology, dynamic concepts specified for the process are easily implemented in corresponding defined hardware logic elements. The number of languages used in system specification and implementation is limited to one, that being English (which is constrained to the specified operators). A PTQ controller's architecture is expressly suited to the process being controlled because it emerges from the process specification. A change made to the process specification automatically re-determines the logic elements to be used and modifies the controller architecture as appropriate when instantiated. PTQ monitor-controllers are mostly reactive electronic hardware systems that continuously verify the correctness of their own activities and those of the physical processes being monitored and controlled.

PTQ is a more natural and fundamental means of specifying, monitoring, and controlling physical processes than is computing. Since the operators in PTQ also include those which are necessary for computation, there is nothing lost but much to be gained through its use for physical process control. Among the advantages are increased safety, ease of use, simple concepts able to be quickly and easily implemented in corresponding logic element hardware (in FPGAs), natively parallel-concurrent and real-time operation, easy modifications or upgrades via changes to the specification, less hardware, flexible architecture, little or no run-time software, and faster response.

Mainstream thinking leans toward preserving that in which it has already invested so much. As a result, the software industry is still looking for the be-all and end-all "super-software" (a much-improved Turing-type machine), not a better and more fundamental approach like ALS (Westinghouse) or PTQ. At the very least, PTQ can supervise physical processes in ways that are more efficient and not subject to the problems of software.

Best regards,
CharlieM

[clip]
Charles Moeller wrote
> Mainstream thinking leans toward preserving that in which it has already invested so much. As a result,
> the software industry is still looking for the be-all and end-all "super-software" (a much-improved
> Turing-type machine), not a better and more fundamental approach like ALS (Westinghouse) or
> PTQ. At the very least, PTQ can supervise physical processes in ways that are more efficient and not subject to the problems of software.

Every FPGA based control system is software based ... this software is called firmware, which can include lots of faults.

Best Regards
Armin Steinhoff

Vladimir:

CharlieM wrote:
>> As Ayn Rand wrote, "We exist for the sake of earning rewards."

Vladimir Zyubin wrote:
> It is up to them to chose their purpose of life.

CharlieM wrote:
>> If I don't find interested parties in academia or enterprise, I will
>> eventually make my method public.

Vladimir Zyubin wrote:
> Well, I personally do realise there are a lot of problems with the current linguistic means in automation. And, as
> I understand, Armin admits the current situation can be improved as well. And there is no need to popularize the idea
> of changes. So, the question is, what is the changes. If you can not told about it because of "the sake of earning
> [material] rewards", then... life is very short and there are a lot of other interesting things besides to spend time
> for decryption of fuzzy allusions that are (I must confess) not understandable to me. As to me, I think, it is more
> easy to wait for a patent or a publication about the "chrono-synclastic" solution.

Thank you for your patience.

Best regards,
CharlieM

By Vladimir E. Zyubin on 21 February, 2012 - 9:37 am

Vladimir Zyubin wrote:
>> As to me, I think, it is more easy to wait for a patent or a
>> publication about the "chrono-synclastic" solution.

CharlieM wrote:
>Thank you for your patience.

Charlie: I really wish you the best, and I have no problem with the discussion; the joke was about the word "wait"... to demonstrate my ability to operate in terms of a temporal logic. :-)

best regards,
Vladimir

Vladimir:

Vladimir Zyubin wrote:
> Charlie: I really wish you the best, and I have no problem with the
>discussion, the joke was about the word
> "wait".. to demonstrate my ability to operate in terms of a temporal logic.

Right. WHILE you WAIT, we can CONTINUE to enjoy the discussion.

Best regards,
CharlieM

Armin,

Charles Moeller wrote:
>> At the very least, PTQ can supervise physical processes in ways
>> that are more efficient and not subject to the problems of software.

Armin Steinhoff wrote:
> Every FPGA based control system is software based ... this software is
> called firmware which can includes lots of faults.

Your statement, "Every FPGA based control system is software based ..." is not strictly correct.

It is true that a computer-based software system is used to configure FPGAs, but only in certain cases is run-time software used to activate FPGA functions. In those cases, the FPGA (or part of it) has been configured to run as a TM-type machine.

In the monitor-controllers I design, Xilinx XPLA Professional software (or a current version) is used to program (configure) the logic elements and interconnection pattern in a CoolRunner complex programmable logic device. The resulting hardware configuration runs by itself, given the appropriate stimuli. There is no need for run-time software. Systems such as these are not "software based," i.e., they do not run on software, although they are software-configured, to be sure.

Best regards,
CharlieM

Charles Moeller wrote:
>>> At the very least, PTQ can supervise physical processes in ways
>>> that are more efficient and not subject to the problems of software.

Armin Steinhoff wrote:
>> Every FPGA based control system is software based ... this software is
>> called firmware which can includes lots of faults.

Charles Moeller wrote:
> Your statement, "Every FPGA based control system is software based ..." is not strictly correct.

> It is true that a computer-based software system is used to configure FPGAs, but only in certain cases is
> run-time software used to activate FPGA functions.

After a cold start of an FPGA-based system you have to upload the firmware to the FPGAs in all cases in order to configure them. This firmware is stored in serial EPROMs or flash memories. The firmware is software and can include a lot of faults, which are also hard to fix.

> In those cases, the FPGA (or part of it) has been configured to run as a TM-type machine.

> In the monitor-controllers I design, Xilinx XPLA Professional software (or a current version) is used to program (configure)
> the logic elements and interconnection pattern in a Cool Runner complex programmable logic device. The resulting
> hardware configuration runs by itself, given the appropriate stimuli. There is no need for run-time software. Systems such as
> these are not "software based" i.e., they do not run on software,

The software is represented as links between the gates of the FPGA ... if one link is wrong the hardware will not work as expected.

Best Regards
Armin Steinhoff

Armin:

Armin Steinhoff wrote:
>>> Every FPGA based control system is software based ... this software is
>>> called firmware which can includes lots of faults.
--snip--
> The software is represented as links between the gates of the FPGA ... if one
> link is wrong the hardware will not work as expected.

It may help you to think of the interconnections as hardware switch settings. These can just as well be fuse links or non-volatile memory cells (vs. SRAM) that are selectively blown or set to make the interconnection pattern.

Best regards,
CharlieM

Armin Steinhoff wrote:
>>>> Every FPGA based control system is software based ... this software is
>>>> called firmware which can includes lots of faults.
> --snip--
>> The software is represented as links between the gates of the FPGA ... if one
>> link is wrong the hardware will not work as expected.

Charles Moeller wrote:
> It may help you to think of the interconnections as hardware switch settings.

The interconnections are defined by software, e.g. by values stored in SRAM cells of the FPGA. That means the software is distributed over thousands of cells of the FPGA device.

And this software defines the behavior of the hardware ... without that software, the hardware of an FPGA is a dumb piece of electronics.

Best Regards
Armin Steinhoff

Some additional comments ...

> [clip]
Charles Moeller wrote:
>> Mainstream thinking leans toward preserving that in which it has already invested so much. As a result,
>> the software industry is still looking for the be-all and end-all "super-software"-a much-improved
>> Turing-type machine-not a better and more fundamental approach like ALS (Westinghouse) or
>> PTQ. At the very least, PTQ can supervise physical processes in ways that are more efficient and not subject to the problems of software.

Armin Steinhoff wrote:
> Every FPGA based control system is software based ... this software is called firmware which can includes lots of faults.

IMHO ... the subject of this communication thread is wrong. Software will always be necessary, even if you program "programmable hardware".

Under development are programming languages which are able to express timing dependencies:
Giotto (http://embedded.eecs.berkeley.edu/giotto ... time triggered) or
Lustre (http://www-users.cs.york.ac.uk/~burns/papers/lustre.pdf or
http://www-verimag.imag.fr/~halbwach/PS/tutorial.ps ... with some elements of temporal logic)

A list of Synchronous Languages:
http://rtsys.informatik.uni-kiel.de/teaching/ss08/v-synch/lectures/index.html#lecture16

Secure, time-oriented languages are available for the production of safe software ...

Best Regards
Armin Steinhoff

Armin,

--snip--

> IMHO ... the subject of this communication thread is wrong. There
> will be always software necessary even if you program "programmable hardware".

You have a point, Armin. I agree there will always be software.

Next time the thread will be more appropriately titled: "After run-time software, what's next?"

Best regards,
CharlieM

Mr. Moeller:

Finally, you have revealed some details about your invention - the PTQ engine. We know it is not based on a Turing Machine model, but we are still in the dark about its suitability to solve real process control problems.

Your claims are substantial, if they can be proven. Are you prototyping the PTQ engine anywhere? Do you have demonstrable results? Can you supply more details on the operators beyond the AND and NOT functions?

We hear some rather large claims, but see no proof.

Dick Caro
Richard H. Caro, CEO, CMC Associates
Certified Automation Professional (ISA)
Buy my books at the ISA Bookstore:
Wireless Networks for Industrial Automation
Automation Network Selection
Consumers Guide to Fieldbus Network Equipment for Process Control
===============================================================

By Charles Moeller on 25 February, 2012 - 1:00 am

Dick Caro and Vladimir,

Dick said:
> Finally, you have revealed some details about your invention - the PTQ engine.

It is not an "engine" as is the TM, but a means of assembling concepts in free-form to suit the physical process being monitored and controlled.

On the matter of concepts: Predators have an inborn tendency to chase that which scurries, runs, or drives away (a commonsense description of prey). (Dogs therefore chase cars unless trained out of it.) Predators also have a proclivity to toy or play with what they catch. Guess what: the "toy" breaks and leaks blood --- yum, yum. A simple instinctive trait sets the stage. Concepts come later with experience and improve the odds of successful hunting.

Our logic languages have not evolved in pace with our human conceptual abilities. We can think about dynamic concepts such as motion, continuity over time, and other concepts that include the inherent notion of change (there is no perception of time without change). Our formal logic, however, can describe only static (timeless) conditions or, if dynamic ones, conditions that can be characterized by static, timeless labels. These labels can be manipulated by static transformations or translations performable via lookup tables (the Chinese Room analogy).

For example, the concept "is raining" can be held as a label and matched for truth value against various other labels describing the weather ("sun is shining," "is foggy," "is snowing," "is raining") until one is found (or happens) which matches the specified statement. This is not dynamic logic; it is only static logic used as a poor substitute for a proper dynamic logic. A proper logic would examine the environment, determine the process underway, and assign the appropriate term(s).
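The static label-matching just described can be sketched as a lookup table; everything below (names, labels, functions) is invented purely for illustration:

```python
# Hypothetical sketch of static "label matching": the truth of
# "is raining" is established by a timeless comparison of labels,
# as a lookup table would do (the Chinese Room analogy).
# All names here are invented for illustration.

WEATHER_LABELS = ("sun is shining", "is foggy", "is snowing", "is raining")

def label_matches(current_weather: str, statement: str) -> bool:
    """Static logic: truth is a timeless comparison of labels."""
    return statement in WEATHER_LABELS and current_weather == statement

print(label_matches("is raining", "is raining"))  # True: the labels coincide
print(label_matches("is foggy", "is raining"))    # False: nothing dynamic was examined
```

Note that nothing in the sketch observes a process over time; it only compares frozen labels, which is the post's point.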

The Boolean logic used in our computers is no less static than is predicate logic, propositional logic, first-order logic (FOL), or the like. Even Boolean-sequential logic is a prescribed series of static logic operations (an algorithm) performed at the tick of the clock or by instructions in step-by-step fashion. Informal Boolean-sequential logic recognizes only one action or activity: STORE, which is used to memorize a state or condition, or to convert an event in time to a value and place in memory (time to space conversion).

My improvements in logic decrease the necessity to abstract dynamic situations into a series of static pictures or states. PTQ does this by enabling duplication of event relationships in the domain of native time (in which all pertinent events happen in an ongoing process). Admittedly, my logic language and algebraic notation are an abstract representation, but one that is closer to reality than the common practice of pegging dynamic activity to a series of static pictures. Except for a straightforward recording of events in real time, the only accepted means is the linear-sequential one mediated by clocks and software instructions.

Under the present usage, logic can't capture "continuity" or "persistence" as themselves, but only as static labels. This is because there is nothing in the logic that is either continuous or persistent. Everything is relegated to static labels in individual frozen frames. The labels are managed in much the same manner as is the manipulation of tiles or dominoes on a plane surface or as is mail being sorted to a grid of stacked cubbyholes.

Regarding motion, physics can describe velocity or acceleration of a body at a point, or as an average over an interval. Any analysis or evaluation to arrive at a descriptive value, however, is a static transformation that results in a discrete, or fixed, number. The Newton-Leibniz calculus can describe the whole dynamic trajectory, but whenever evaluated, settles on a discrete, static numerical value.

So you can see that neither our logic nor our mathematics has evolved in pace with our human conceptual abilities. I am doing something about the logic aspect. Someone else will have to devise the mathematics to follow these new logical concepts. (more to follow)

Best regards,
CharlieM

By Charles Moeller on 25 February, 2012 - 12:19 pm

Dick Caro and Vladimir E. Zyubin,

The concept of order has meaning in the space-domain. We order spaces and assign numbers to them on our measuring sticks, our memory cells, and the dials of our clocks. We can determine the direction and amount of progress by whether the numbers are increasing or decreasing, and by how much. Our program counters access instructions in predetermined orders, usually consecutive.

The concept of order also has meaning in the time-domain. Temporal order is the sequence of events. The order of events determines the character of the process and its results. If the order of events goes awry, the process fails. A useful measure of a process, therefore, is the correct sequence of process events. If we want to determine the temporal order of two signals marking events, A and B, we have several options:

In Turing-type systems, sequence-monitoring must be performed via translation from the time-domain to the space-domain. The signal from A is sampled, the signal from B is sampled. When A occurs, a time-stamp is recorded in a designated location and A-sampling is suspended. When B occurs, a time stamp for it is recorded in a different location and B-sampling is suspended. When both locations have time stamps, they are differenced. The sign of the difference is used to determine the order of events.
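A minimal sketch of that time-stamp procedure, with the stamp values and function name invented for illustration:

```python
# Hedged sketch of the Turing-style method described above: each event
# is sampled and time-stamped into its own memory location, the two
# stamps are differenced after the fact, and the sign of the difference
# yields the order of events (time-to-space conversion).

def order_from_stamps(stamp_a: float, stamp_b: float) -> str:
    """Decide event order from two recorded time stamps."""
    diff = stamp_a - stamp_b
    if diff < 0:
        return "A before B"
    if diff > 0:
        return "B before A"
    return "indistinguishable (within clock resolution)"

# Invented stamps: A recorded at t = 1.00 s, B at t = 1.25 s.
print(order_from_stamps(1.00, 1.25))  # A before B
```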

In PTQ systems, a logic operator and corresponding logic element directly determine the temporal order (sequence) of two events. The operator (A SEQ B) discriminates the order of inception of conditions A and B to signify when it is the case that A goes high first, then B. The lasting condition (output high) continues in time until purposely reset (returned to a low condition). The operator (B SEQ A) likewise discriminates the order of inception to signify when it is the case that B goes high first, then A, and its output also persists until purposely reset.

The dynamic SEQ operator and its corresponding hardware logic elements have been configured to sense the order of inception of conditions in the continuous-time domain. No logic designed for static evaluation can accomplish that task in such a straightforward manner. Indeed, if a TM-type system is required to determine the temporal order of the inception of diverse conditions, it must sample and time-stamp all input conditions, then perform arithmetic procedures on those time-stamps after the fact to determine the order in which it received the conditions.
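For comparison, the latching behavior described for (A SEQ B) can be approximated in software. The class below is a hypothetical simulation (the name and state-machine details are invented); a real PTQ element would be hardware, and the post argues a software simulation forfeits the claimed benefits:

```python
# Hypothetical software model of the (A SEQ B) element described above:
# the output latches high once A has gone high first and B goes high
# afterward, and it stays high until purposely reset.

class SeqAB:
    def __init__(self):
        self.a_seen = False   # remembers that A's inception came first
        self.out = False      # latched output condition

    def update(self, a: bool, b: bool) -> bool:
        """Feed the current input levels; returns the latched output."""
        if not self.out:
            if a and not b and not self.a_seen:
                self.a_seen = True   # A went high first
            elif self.a_seen and b:
                self.out = True      # then B went high: latch
        return self.out

    def reset(self):
        self.a_seen = False
        self.out = False

seq = SeqAB()
seq.update(a=True, b=False)          # A goes high first
print(seq.update(a=True, b=True))    # True: A-then-B order detected

rev = SeqAB()
rev.update(a=False, b=True)          # B goes high first
print(rev.update(a=True, b=True))    # False: wrong order, output stays low
```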

Best regards,
CharlieM

By Vladimir E. Zyubin on 27 February, 2012 - 6:33 am

CharlieM wrote:

> The concept of order has meaning in the space-domain. We order spaces and assign numbers to them on our measuring
> sticks, our memory cells, and the dials of our clocks. We can determine the direction and amount of progress by
> whether the numbers are increasing or decreasing, and by how much. Our program counters access instructions in
> predetermined orders, usually consecutive.

In my experience the order problem is not a problem at all. Would you like to provide an example from the control-algorithm domain?

BTW, I have just read the article [1]. Alas, such an academic paper looks like scholastic exercises produced by people who have never seen a real control task. OK, I like fig. 4, mostly because it looks like it was borrowed from one of the articles on process-oriented programming :-)

best, Vladimir

1. Organizing the Aggregate: Languages for Spatial Computing
Jacob Beal (Raytheon BBN Technologies, USA),
Stefan Dulman (Delft Univ., the Netherlands),
Kyle Usbeck (Raytheon BBN Technologies, USA),
Mirko Viroli (Univ. Bologna, Italy),
Nikolaus Correll (Univ. Colorado Boulder, USA)

http://arxiv.org/pdf/1202.5509v1.pdf

By Vladimir E. Zyubin on 20 February, 2012 - 6:23 am

Bruce Durdle wrote:

> The difficulty in developing solutions to control problems is in first defining exactly what the problem to be solved
> is. <...> Defining the problem and specifying what is an acceptable response has to be done by people who
> know what they want the system to do - the end-users (operators and plant managers).

You point out the key feature. Any controlled object has a control algorithm that is defined during the design process... before the controlled object is made. So, the people who know what the system must do are the inventors and designers.

> Note that nowhere above have I referred to "hardware" or "software" solutions

It is quite obvious that the implementation ("how to do") problem is a second-order problem; the first-order problem is to express "what to do" in a maximally seamless form (a lack of "seams" between the designer's way of thinking and the program form).

> I have found on a number of occasions that a combination of the FBD and SFC formats meets all
> needs and is quite easily interpreted by most people with a minimum of training in how to read them.

Agreed. SFC has features that are close to those that should be, but the way the designer thinks differs a bit from SFC's conceptual means...
FBD (the data-flow concept) has a very limited area of applicability, IMO.

best regards, Vladimir

Vladimir,

You wrote:
> It is quite obvious, the implementation ("how to do") problem is a second-order
> problem, the first-order problem is to express "what to do" in a maximal
> seamless form (lack of "seams" between the designer way of thinking and the
> program form).

That is a very nice way of expressing the real problem, Vladimir.

I object to the method of TM, shared resources, and software because it is several steps removed from reality. Using computation, we act on the values and locations of tokens that supposedly are good representations of samples taken from the process, but we cannot be completely assured that this is the case. TM processing puts many, many components and factors subject to faults between the real process and that monitoring and control means.

I have devised a better way of specifying processes that can be directly implemented in hardware that performs immediately in a stimulus-response manner. My method depends only upon precise specification of the actual process in terms of the allowable PTQ operators.

Best regards,
CharlieM

By Vladimir E. Zyubin on 21 February, 2012 - 9:17 am

CharlieM wrote:
> I object to the method of TM, shared resources, and software because it is several steps removed from reality.

OK. The "TM method" is bad, but the modern computer architecture does not prevent the use of other methods, based on lambda calculus, for example.

> I have devised a better way of specifying processes that can be directly implemented in hardware that
> performs immediately in a stimulus-response manner. My method depends only upon precise specification
> of the actual process in terms of the allowable PTQ operators.

Well, I do not know what the PTQ operators are, but it does not matter, because the right question is: can the PTQ operators be implemented on the modern computer architecture or not?

If the PTQ operators can be implemented on the architecture, then your invention can be divided into two parts: a way of specification and a way of implementation. And the parts should be discussed separately (the "divide and conquer" principle). If they cannot be implemented, then it is a real academic result, and you need not think about money and a job at all, because you could earn a lot of money as an invited lecturer.

best regards,
Vladimir

Vladimir,

CharlieM wrote:
>> I object to the method of TM, shared resources, and software because it is several steps removed from reality.

Vladimir Zyubin wrote:
> OK. The "TM method" is bad, but the modern computer architecture does not
> prevent to use other methods, based on lambda-calculus, for example.

Alonzo Church developed the lambda calculus, a mathematical system for defining computable functions. It is a model of computation equivalent in power to (and with the limitations of) the Turing machine.

CharlieM wrote:
>> I have devised a better way of specifying processes that can be directly implemented in hardware that
>> performs immediately in a stimulus-response manner. My method depends only upon precise specification
>> of the actual process in terms of the allowable PTQ operators.

Vladimir Zyubin wrote:
> Well, I do not know what the PTQ operators are, but it does not matter,
> because the right question is, can the PTQ operators be implemented on the
> modern computer architecture or cannot.

Every prior method of specifying control systems (RTL, fuzzy logic, neural nets, etc.) has ultimately been implemented in or through computers, thereby assuming the impediments and shortcomings of the Turing paradigm (TMs with shared resources and software).

PTQ is not a programming technique or language, it is a hardware language that can transform a process specification into hardware that can monitor and control that process. PTQ is a different method of specifying and implementing processes and process controllers. It does not use a fixed architecture, but is free-form and allows a selection of defined special logic elements. That's why a "sea-of-gates" FPGA is especially suitable for PTQ implementation in hardware.

PTQ operators do not need the massive and complex computer architecture for support, as they are not "interpreted." They convey the literal meanings of their names, and those activities are recognized or enacted in their corresponding hardware logic elements and architecture. Simulating these operators/elements/functions by means of computer architecture and software would cause the loss of their benefits by orders of magnitude.

Vladimir Zyubin wrote:
> If the PTQ operators can be implemented on the architecture, ...

Doing so would be to lose most of the benefits of PTQ and assume the impediments of computation.

Vladimir Zyubin wrote:
> If they cannot be implemented, then it is a real academic >result, and you need no to think about
> money and job at all, because you could earn a lot of money as an invited lecturer.

I would like to do some of that.

Best regards,
CharlieM

By Vladimir E. Zyubin on 22 February, 2012 - 2:19 pm

Vladimir Zyubin wrote:
>> OK. The "TM method" is bad, but the modern computer architecture does not
>> prevent to use other methods, based on lambda-calculus, for example.

CharlieM wrote:
> Alonzo Church developed the lambda calculus, a mathematical system for
> defining computable functions. It is a model of computation equivalent in power
> to (and with the limitations of) the Turing machine.

Yes, these two models of computation are equivalent, and they are _different_... and Lisp is successfully used on TM architectures.
BTW, have a look at http://www.youtube.com/watch?v=7XfA5EhH7Bc It seems it will be interesting to you.

Vladimir Zyubin wrote:
>> Well, I do not know what the PTQ operators are, but it does not matter,
>> because the right question is, can the PTQ operators be implemented on the
>> modern computer architecture or cannot.

CharlieM wrote:
> PTQ is not a programming technique or language, it is a hardware language that
> can transform a process specification into hardware that can monitor and
> control that process. PTQ is a different method of specifying and implementing
> processes and process controllers.

I am sorry, I cannot understand the statement "PTQ is not a programming language, it is a hardware language". What do you mean by "hardware language"? What do you mean by "software language"? Do you mean a hardware [description] language, or what? Or do you just mean a "programming language"?

CharlieM wrote:
> PTQ operators do not need the massive and complex computer architecture for
> support, as its operators are not "interpreted." They convey the literal
> meanings of their names, which activities are recognized or enacted in
> their corresponding hardware logic elements and architecture. Simulation of
> these operators/elements/functions by means of computer architecture and
> software would cause the loss of their benefits by orders of magnitude.

I am afraid it looks like the principles implemented in one of the first Soviet PCs (1965):
http://en.wikipedia.org/wiki/Mir_(computer)

best regards, Vladimir

By Charles Moeller on 23 February, 2012 - 11:23 am

Vladimir:

CharlieM wrote:
>> PTQ is not a programming technique or language, it is a hardware language that
>> can transform a process specification into hardware that can monitor and control
>> that process. PTQ is a different method of specifying and implementing processes and process controllers.

Vladimir Zyubin wrote:
> I am sorry, I cannot understand the state "PTQ is not a programming language, it is a hardware language".
> What do you mean by "hardware language"? WDYM by software language? Do you mean
> hardware [description] language, or what? Maybe do you just mean a "programming language"?

C, Fortran, and Basic are programming languages. A programming language assumes a Turing-type machine having hardware facilities suitable to read line-by-line and execute the instructions written in that language.

Automating physical processes and their machinery can reduce the costs of mass-produced products. Monitoring and controlling such machines are necessary tasks. PTQ is an alternative hardware control technology that does not use sampling, instructions, TMs, run-time software, shared resources, time-sharing, or any of the other common computational means used to achieve process control.

PTQ is a physical process-description language (not a computer language) whose operators and functions have corresponding hardware logic elements. It does not "run" on a computer, nor is it a series of instructions. A PTQ specification results in a stand-alone logic element configuration that can replace computation as a means to monitor and control a process. If one can specify a physical process using its defined operators and functions, then one may use that process description to identify the corresponding logic elements and specify their interconnections. The resulting hardware configuration will mirror the physical process in PTQ logic as process events occur and conditions change. Any deviation from the specified process operation will cause a corrective action, a protective stop, or an alarm.
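As a purely hypothetical illustration of the workflow sketched above (specification terms selecting corresponding logic elements), one might imagine something like the following. Only the SEQ operator has been described in this thread, so the element table, function names, and the toy specification below are all invented:

```python
# Purely hypothetical sketch of "specification -> logic elements":
# each operator term in a process specification selects a corresponding
# hardware logic element. Only SEQ comes from the thread's description;
# the rest of this table is an invented placeholder.

ELEMENT_LIBRARY = {
    "SEQ": "sequence-discriminator element",   # described in the thread
    "AND": "conjunction element",              # invented placeholder
    "NOT": "inversion element",                # invented placeholder
}

def elements_for_spec(spec_terms):
    """Map each operator term of a specification to its logic element."""
    return [(term, ELEMENT_LIBRARY[term]) for term in spec_terms
            if term in ELEMENT_LIBRARY]

# A toy "specification": a SEQ relationship guarded by an AND.
print(elements_for_spec(["SEQ", "AND"]))
```

The interconnection step (the netlist between elements) is omitted here, since the thread gives no detail about it.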

I will produce an example problem and solutions to show the difference between a computational approach and the PTQ method.

CharlieM wrote:
>> PTQ operators do not need the massive and complex computer architecture for
>> support, as its operators are not "interpreted." They convey the literal
>> meanings of their names, which activities are recognized or enacted in
>> their corresponding hardware logic elements and architecture. Simulation of
>> these operators/elements/functions by means of computer architecture and
>> software would cause the loss of their benefits by orders of magnitude.

Vladimir Zyubin wrote:
> I am afraid, it looks like principles implemented in one of the first soviet PCs (1965)
> http://en.wikipedia.org/wiki/Mir_(computer)

Not so, as the method of PTQ is non-computational.

Best regards,
CharlieM

Charles Moeller wrote:
> C, Fortran, and Basic are programming languages. A programming language assumes a
> Turing-type machine having hardware facilities suitable to read line-by-line and execute
> the instructions written in that language.

> Automating physical processes and their machinery can improve the costs of mass-produced products.
> Monitoring and controlling such machines are necessary tasks. PTQ is an alternative hardware
> control technology that does not use sampling, instructions, TMs, run-time software, shared resources,
> time-sharing, or any of the common computational means that are used to achieve process control.

> PTQ is a physical process-description language (not a computer language) whose
> operators and functions have corresponding hardware logic elements.

Vladimir Zyubin wrote:
"PTQ is not a programming language, it is a hardware language"??

> It does not "run" on a computer, nor is it a series of instructions. A PTQ specification
> results in a stand-alone logic element configuration that can replace computation as a
> means to monitor and control a process. If one can specify a physical process using its
> defined operators and functions, then one may use that process description to identify
> the corresponding logic elements and specify their interconnections. The resulting
> hardware configuration will mirror the physical process in PTQ logic as process events
> occur and conditions change. Any deviation from the specified process operation will
> cause a corrective action, a protective stop, or an alarm.

You mean something like this?: http://www.youtube.com/watch?v=P1ow5_-CNEU

Best Regards
Armin Steinhoff

By Charles Moeller on 24 February, 2012 - 3:04 pm

Armin wrote:
>You mean something like that ?:
>http://www.youtube.com/watch?v=P1ow5_-CNEU

No.

Best Regards
CharlieM

I agree with "Julie In Austin". Most problems I've dealt with in other people's software are due to factors other than the technologies/capabilities of the system. My career is seemingly split between creating my own software and updating/rewriting/documenting other people's programs. Sometimes a person will come through a company and pull one program out after another, and then some poor fool is stuck maintaining them 10 years later, after the original author has either found another job or got canned. It seems that some folks don't even like updating their own software and pass the buck to the next guy. Until management starts caring about technique/style/documentation/quality in software, this problem will persist no matter what the technology is.

KEJR

Ken:

> I agree with "Julie In Austin". Most problems I've dealt with in other peoples software is due to other factors
> rather than the technologies/capabilities of the system.

--snip--

> Until management starts caring about technique/style/documentation/quality in software this problem will persist no
> matter what the technology is.

I also agree with both you and Julie In Austin, and an article in the January 2012 issue of Control Engineering states that "60% of all failures come from design."

The point I have been making is that the Turing paradigm (a very complex method) is not the optimum method for real time, time-critical, and safety-critical control systems.

Best regards,
CharlieM

By William Sturm on 22 February, 2012 - 8:15 am

CharlieM said:

> The point I have been making is that the Turing paradigm (a very complex method)
> is not the optimum method for real time, time-critical, and safety-critical control systems.

I think the main roadblock you currently face is that the Turing model is all that most of us know and understand, except for the obsolete concept of the "analog computer," which I believe is based on operational amplifiers.

As far as optimal real time digital control, the best results I have seen so far have been focused on maximizing update rates and resolution in an attempt to emulate an analog system. Think of a CD player, for instance. I think if you can get update rates and resolution sufficiently high so the output of the system looks like analog, then the rest is a "simple matter of programming".
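Bill's point can be made concrete with a small sketch (mine, not from the thread): for an ideal uniform quantizer, the worst-case output error halves with every added bit, so at high bit depths and update rates the digital output becomes practically indistinguishable from analog.

```python
import math

def worst_quantization_error(bits, samples=1000):
    """Worst-case error of an ideal uniform quantizer over one sine period."""
    levels = 2 ** bits
    worst = 0.0
    for i in range(samples):
        x = math.sin(2 * math.pi * i / samples)        # "analog" value in [-1, 1]
        code = round((x + 1.0) / 2.0 * (levels - 1))   # quantize to an integer code
        y = code / (levels - 1) * 2.0 - 1.0            # reconstruct the output
        worst = max(worst, abs(x - y))
    return worst

# Worst error is half a step (full scale / 2**bits), so each extra bit halves it;
# at 16 bits (CD audio) it is already on the order of 1e-5 of full scale.
```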

Bill Sturm

By Vladimir E. Zyubin on 22 February, 2012 - 11:50 am

Bill Sturm wrote:
> I think if you can get update rates and resolution sufficiently high so the output
> of the system looks like analog, then the rest is a "simple matter of programming".

There is such a thing as accuracy (measurement uncertainty, to be precise). The limit of the derivative part of any PID algorithm, as the update rate approaches infinity, is infinity. :-)

The "faster the better" idea planted in our brains by the idiotic real-time ads is wrong. For most control tasks, the current performance of the platforms is more than enough.
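Vladimir's point about the derivative term can be illustrated with a rough sketch (my own, with made-up numbers): for a fixed sensor-noise amplitude, the discrete D-term (e[n] - e[n-1]) / dt grows without bound as the update interval dt shrinks.

```python
import random

def max_derivative_term(dt, kd=1.0, n=1000, noise=0.01, seed=42):
    """Largest |D-term| seen for a constant setpoint with additive sensor noise."""
    rng = random.Random(seed)           # fixed seed: identical noise at every rate
    prev = rng.uniform(-noise, noise)
    worst = 0.0
    for _ in range(n):
        cur = rng.uniform(-noise, noise)
        worst = max(worst, abs(kd * (cur - prev) / dt))   # discrete derivative
        prev = cur
    return worst

slow = max_derivative_term(dt=0.1)      # 10 Hz update
fast = max_derivative_term(dt=0.0001)   # 10 kHz update
# Identical noise, yet the D-term is about 1000x larger at the faster rate:
# raising the update rate amplifies noise in the derivative path.
```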

best regards, Vladimir

Bill Sturm,

CharlieM said:

>> The point I have been making is that the Turing paradigm (a very complex method)
>> is not the optimum method for real time, time-critical, and safety-critical control systems.

Bill Sturm wrote:
> I think the main roadblock you currently face is that the Turing model is all that most of us know and
> understand. The only exception is the obsolete concept of the "analog computer", which I believe was based on operational amplifiers.

- and chopper amplifiers, and servo systems that compute trig functions.

Bill Sturm wrote:
> As far as optimal real time digital control, the best results I have seen so far have been focused on maximizing
> update rates and resolution in an attempt to emulate an analog system.

Hence the quest for higher and higher clock rates.
Will any speed be enough?
____

I have developed a new control technology and logic that does not depend upon numbers or computation. I am having trouble finding an organization to champion it, so that China, India, or Japan do not get the benefits before the USA does. The following describes some of its remarkable characteristics:

A method of physical process automation is announced that can function without microprocessors, software, clocked state-machines, or register-transfer-level devices. This "natural" method directly senses and operates on the elemental signals intrinsically bound to the process. The temporal relationships of the process activities that generate the signals are defined and determined by the character and conduct of the process. In like manner, the temporal relationships between and among the signals so generated enable direct monitoring and control of the process. This method of automata is based upon the temporal relationships of the events and conditions of the process as described and managed by the operators and formalisms of PTQ.

This new automation technology operates from DC to light-speed, has parallel-concurrent functionality, does not rely upon run-time software and is real-time and continuous-time as well as frame-freeze, and is both synchronous and asynchronous. It is safer, faster, and simpler, and costs much less to design, implement, and maintain time-, safety-, and mission-critical process-control systems (than computer- and microprocessor-based designs).

This technology includes a dynamic logic with which to express the topology of change, time-domain temporal logic operators and corresponding hardware logic elements, and a method of translating natural language (English) process specifications directly to hardware (in FPGAs) to mirror, monitor, and control physical processes. A user's manual describes the technology and includes example control systems.

Who might have an interest in this next new thing?

Best regards,
CharlieM

CharlieM said:
>A method of physical process automation is announced that can function without
>microprocessors, software, clocked state-machines, or register-transfer-level
>devices. This "natural" method directly senses and operates on the elemental
>signals intrinsically bound to the process. The temporal relationships of the
>process activities that generate the signals are defined and determined by
>the character and conduct of the process. In like manner, the temporal
>relationships between and among the signals so generated enable direct
>monitoring and control of the process. This method of automata is based upon
>the temporal relationships of the events and conditions of the process as
>described and managed by the operators and formalisms of PTQ.
>
>This new automation technology operates from DC to light-speed ...

Sounds like a good old-fashioned analogue computer to me! - but light-speed?

By Charles Moeller on 23 February, 2012 - 10:39 pm

Bruce,

CharlieM said:
>>A method of physical process automation is announced that can function without
>>microprocessors, software, clocked state-machines, or register-transfer-level
>>devices. This "natural" method directly senses and operates on the elemental
>>signals intrinsically bound to the process. The temporal relationships of the
>>process activities that generate the signals are defined and determined by the
>>character and conduct of the process. In like manner, the temporal relationships
>> between and among the signals so generated enable direct monitoring and control
>> of the process. This method of automata is based upon the temporal relationships
>> of the events and conditions of the process as described and managed by the
>> operators and formalisms of PTQ.
>>
>> This new automation technology operates from DC to light-speed ...

Bruce wrote:
> Sounds like a good old-fashioned analogue computer to me! – but light-speed?

PTQ is digital in amplitude, analog in time (usually no clock).

PTQ is made up primarily of directly-connected stimulus-response mechanisms, so there is no wait for "processing" or for instructions to be carried out before a decision is made; answers arrive at the speed of the motivating power used. That means at electron-propagation speeds in electronic logic elements, or at light speed in optical logic elements.

Best regards,
CharlieM

There has been too much mystery and there have been too many outrageous claims in this thread. As near as I can tell, if your application can be solved with Boolean state logic and it is not beyond the capacity of a field programmable gate array, then Mr. Moeller's configuration/programming software called PTQ can be used to program the gate array. From that point on, once the Inputs and Outputs are connected to the FPGA, the system works in real time. There is nothing really new here except perhaps Mr. Moeller's programming software.

You are welcome to correct my impression.

Dick Caro

Richard H. Caro, CEO, CMC Associates
Certified Automation Professional (ISA)
Buy my books at the ISA Bookstore:
Wireless Networks for Industrial Automation
Automation Network Selection
Consumers Guide to Fieldbus Network Equipment for Process Control
===============================================================

By Charles Moeller on 28 February, 2012 - 11:13 pm

Dick Caro wrote:

> As near as I can tell, if your application can be solved with Boolean state logic
> and it is not beyond the capacity of a field programmable gate array, then Mr.
> Moeller's configuration/programming software called PTQ can be used to
> program the gate array. From that point on, once the Inputs and Outputs are
> connected to the FPGA, the system works in real time. There is nothing really
> new here except perhaps Mr. Moeller's programming software.

Dick urges that the rest of you folks just move along as there is nothing extraordinary to see here.

I will remain ready to answer questions from any individuals that have a continuing interest in my work on temporal logic.

BTW, I do not have any "programming software" at present. All my temporal logic designs are "hand-crafted" schematics, composed by following the process specification and selecting the corresponding blocks from my proprietary library of special (and standard) logic elements. The resulting hardware configuration mirrors, monitors, and controls the process in real time.

Best regards,
CharlieM

By Vladimir E. Zyubin on 24 February, 2012 - 5:49 am

CharlieM said:
> This new automation technology operates from DC to light-speed, has parallel-concurrent functionality, does
> not rely upon run-time software and is real-time and continuous-time as well as frame-freeze, and is both synchronous
> and asynchronous. It is safer, faster, and simpler, and costs much less to design, implement, and maintain time-,
> safety-, and mission-critical process-control systems (than computer- and microprocessor-based designs).
---- cut ---
> Who might have an interest in this next new thing?

A _new_ thing is not very interesting "as is" for practical use. The people who make the decision you are looking for are interested in the economic effect, which includes the question of migration as well. IMO, you should now emphasize the profit, not the difference (the new principles).

If you can show that your device can easily replace a conventional PLC (e.g. a Siemens S-700), needs no additional training of the personnel, is three times cheaper, has an MTBF three times that of the PLC, etc., then you will find the people. There is no other way.

They are interested in "what" they will have; "why" is a second-order question.

And maybe it would be better to identify a niche for your solution.

A working prototype would be helpful as well.

best,
Vladimir

This is a very interesting thread, and I have just started following it. I am trying to figure out how this new paradigm will work, but I have a few queries and comments about it that I want to share and discuss. From what I saw, this was the most interesting part:

statement 1 - "A method of physical process automation is announced that can function without microprocessors, software, clocked state-machines, or register-transfer-level devices. This "natural" method directly senses and operates on the elemental signals intrinsically bound to the process. The temporal relationships of the process activities that generate the signals are defined and determined by the character and conduct of the process. In like manner, the temporal relationships between and among the signals so generated enable direct monitoring and control of the process. This method of automata is based upon the temporal relationships of the events and conditions of the process as described and managed by the operators and formalisms of PTQ."

So does this mean that your method is able to produce an exact inverse of the plant dynamics? (Based on the fact that you said, "This "natural" method directly senses and operates on the elemental signals intrinsically bound to the process. The temporal relationships of the process activities that generate the signals are defined and determined by the character and conduct of the process.")

Or does it mean that it uses fuzzy-like sets for control? ("This method of automata is based upon the temporal relationships of the events and conditions of the process as described and managed by the operators and formalisms of PTQ.")

I am confused between the two statements above. What does it do: replicate and invert the plant dynamics, or just use temporal relations like "if this, do that" (as fuzzy sets do), or both?

Here is an example of what I understand.

Let the plant dynamics (as you say elemental signals intrinsically bound to the process) can be represented by the state nonlinear equation

X(dot) = F(x) + B(x)*U

where X are the states of the system (there can be any number of them), and F(x) and B(x) represent the model (nature) of the system.

Does your automaton give a control output such that U = Inverse(B(x)) * (-F(x) - Kx)? I.e., can it perfectly predict or approximate F(x) and B(x), such that when you apply this control signal to the system you get

X(dot) = -Kx, a linear and stable system, with perfect tracking and a rate of response dependent only on the set gain K?
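For what it's worth, the feedback-linearization idea in the question above can be sketched numerically. This is only an illustration of the poster's equations, not anything from PTQ; the plant terms F(x) = x^2 and B(x) = 1 + x^2 are arbitrary choices of mine.

```python
# Scalar plant: x_dot = F(x) + B(x) * u, with F and B assumed for illustration.
# Choosing u = (1/B(x)) * (-F(x) - k*x) cancels the nonlinearity exactly,
# leaving the stable linear dynamics x_dot = -k * x.

def simulate(k=2.0, x0=1.0, dt=1e-3, steps=5000):
    F = lambda x: x * x            # assumed plant drift term
    B = lambda x: 1.0 + x * x      # assumed (nonzero) input gain
    x = x0
    for _ in range(steps):
        u = (1.0 / B(x)) * (-F(x) - k * x)   # feedback-linearizing control
        x += dt * (F(x) + B(x) * u)          # Euler step of the plant
    return x

final = simulate()   # state decays like exp(-k*t), so it is tiny after t = 5 s
```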

statement 2 - "This new automation technology operates from DC to light-speed, has parallel-concurrent functionality, does not rely upon run-time software and is real-time and continuous-time as well as frame-freeze, and is both synchronous and asynchronous. It is safer, faster, and simpler, and costs much less to design, implement, and maintain time-, safety-, and mission-critical process-control systems (than computer- and microprocessor-based designs)."

When you say it operates from DC to light speed, do you mean that there is no time delay between measurement (estimation) and control? Is this even possible? Though this may sound desirable, in my humble opinion it may not be a good control option. How can disturbance rejection be done if this is the case?

I apologise if my questions seem a little overboard. I am really curious to know how this automation control is implemented. As you say, it has no processor-based design and no software (the only option I see is hardware that can emulate the inverse of the process dynamics). Are you writing any paper in this regard? If you have already written one, I would really like to read it.

By Charles Moeller on 27 February, 2012 - 1:08 pm

Process Value,

I made a couple of statements:

> A method of physical process automation is announced that can function without microprocessors,
> software, clocked state-machines, or register-transfer-level devices. This "natural" method directly senses and
> operates on the elemental signals intrinsically bound to the process. The temporal relationships of the process
> activities that generate the signals are defined and determined by the character and conduct of the process. In like
> manner, the temporal relationships between and among the signals so generated enable direct monitoring and
> control of the process. This method of automata is based upon the temporal relationships of the events and
> conditions of the process as described and managed by the operators and formalisms of PTQ."


and

> This new automation technology operates from DC to light-speed, has parallel-concurrent
> functionality, does not rely upon run-time software and is real-time and continuous-time as well as frame-freeze,
> and is both synchronous and asynchronous. It is safer, faster, and simpler, and costs much less to design,
> implement, and maintain time-, safety-, and mission-critical process-control systems (than computer- and
> microprocessor-based designs).

In explanation, let us examine a simple control function, such as the direction (DIR) controller for an elevator.

The safety and operational constraints for the direction controller of a two-floor elevator are stated:

"The direction contactor (DIR) can change only when at a floor while the lift motor is off and after the door has opened and closed and the alternate floor has been requested. The alternate floor request is assessed after the constraints have been fulfilled."

Stated in PTQ English:
At Floor 1 AND Lift Motor OFF AND Door Open SEQ Closed CREATES Door Cycle Complete WHILE DCC AND Floor 2 Requested CREATES DIR WHILE At Floor 2 AND Lift Motor OFF AND Door Open SEQ Closed CREATES Door Cycle Complete WHILE DCC AND Floor 1 Requested CREATES /DIR WHILE LM Resets DCC WHILE [(LM AND DIR) = UP] WHILE [(LM AND /DIR) = DN]

Condensed:
{[FL1 * /LM * (DO SEQ DC)] # DCC} [(DCC * FL2Req) # DIR] [(DIR * LM) = UP] {[FL2 * /LM * (DO SEQ DC)] # DCC} [(DCC * FL1Req) # /DIR] [(/DIR * LM) = DN]

The real-time logic of this part of an elevator controller can be stated in one continuous line of PTQ parallel-concurrent "source code" and implemented in functional hardware having seven inputs and one output, in a configuration of fewer than 50 equivalent gates. This controller operates at the speed of the process, with a latency of only six gate-delays in the directly-connected logic (in whatever device technology is used). Temporal discrimination is less than one gate-delay.
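For readers trying to decode the condensed expression, here is one possible software rendering of the DIR latching behaviour. This is my own interpretation of the statement above, written as sequential Python purely for readability; it is not CharlieM's hardware, which is directly-connected logic with no program.

```python
class DirController:
    """Software model of the DIR logic (one reading of the PTQ statement)."""

    def __init__(self):
        self.dcc = False            # Door Cycle Complete latch
        self.dir_up = False         # direction contactor: True = UP
        self.door_was_open = False  # remembers the "Door Open" half of DO SEQ DC

    def step(self, at_fl1, at_fl2, lm, door_open, fl1_req, fl2_req):
        if lm:                                   # "LM Resets DCC"
            self.dcc = False
            self.door_was_open = False
            return
        at_floor = at_fl1 or at_fl2
        if at_floor and door_open:               # first half of Open SEQ Closed
            self.door_was_open = True
        elif at_floor and self.door_was_open:    # door just closed: full cycle
            self.dcc = True
            self.door_was_open = False
        if self.dcc and at_fl1 and fl2_req:      # (DCC * FL2Req) -> DIR
            self.dir_up = True
        if self.dcc and at_fl2 and fl1_req:      # (DCC * FL1Req) -> /DIR
            self.dir_up = False

c = DirController()
c.step(True, False, False, True, False, False)   # at floor 1, door opens
c.step(True, False, False, False, False, True)   # door closes, floor 2 requested
# c.dir_up is now True; any step with lm=True clears DCC and freezes DIR.
```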

Q: How many lines of linear-sequential code and equivalent gates would be used in a TM-type controller?

Best regards,
CharlieM

By Vladimir E. Zyubin on 28 February, 2012 - 4:00 am

CharlieM wrote:

> Stated in PTQ English:
> At Floor 1 AND Lift Motor OFF AND Door Open SEQ Closed CREATES Door Cycle
> Complete WHILE DCC AND Floor 2 Requested CREATES DIR WHILE At Floor 2 AND Lift
> Motor OFF AND Door Open SEQ Closed CREATES Door Cycle Complete WHILE DCC
> AND Floor 1 Requested CREATES /DIR WHILE LM Resets DCC WHILE [(LM AND DIR) = UP]
> WHILE [(LM AND /DIR) = DN]

It seems it is too long to be acceptable to short-term memory. In practice we use periods, semicolons, and commas to divide complex information into sentences. It is just a psychological limit. Programming is mostly psychology; humans write programs for humans. :-) From a language we need the ability to structure information, to organize it hierarchically, to use metaphor, and to isolate one part from another. So I have my doubts that an elevator designer thinks according to the formula.

And an elevator is too complex an object. What about another challenge: a washroom hand dryer? Just one input (the hand sensor) and one output (the dryer control). The solution requires a bit of temporal logic, because an unstable hand position leads to sensor jitter.

By Charles Moeller on 28 February, 2012 - 11:08 am

Vladimir,

CharlieM wrote:

>> Stated in PTQ English:
>> At Floor 1 AND Lift Motor OFF AND Door Open SEQ Closed CREATES Door Cycle
>> Complete WHILE DCC AND Floor 2 Requested CREATES DIR WHILE At Floor 2 AND Lift

---- snip ----

Vladimir Zyubin wrote:
> It seems it is too long to be acceptable to short-term memory. In practice we use periods, semicolons,
> and commas to divide complex information into sentences. It is just a psychological limit.

---- snip ----

Sorry my language doesn't meet your approval. You must keep in mind that PTQ is primarily a dynamic logic and language, rather than a (static) linear-sequential one, so it may appear somewhat strange, at first.

The punctuation in PTQ resides in the temporal operators, in the case given: WHILE, which can be implied. For example:

"At Floor 1 AND Lift Motor OFF AND Door Open SEQ Closed CREATES Door Cycle Complete" is one complete thought and activity.

> And an elevator is too complex an object.

I selected the elevator example as one having several safety concerns and constraints, but to keep it simple, only treated the DIR part of the whole controller.

Best regards,
CharlieM

By Vladimir E. Zyubin on 28 February, 2012 - 11:44 pm

CharlieM wrote:
>>> Stated in PTQ English: At Floor 1 AND Lift Motor OFF AND Door Open SEQ Closed CREATES Door Cycle
>>> Complete WHILE DCC AND Floor 2 Requested CREATES DIR WHILE At Floor 2 AND Lift

Vladimir Zyubin wrote:
>> It seems it is too long to be acceptable to short-term memory.

CharlieM wrote:
>Sorry my language doesn't meet your approval. You must keep in mind that PTQ is primarily a dynamic logic and
>language, rather than a (static) linear-sequential one, so it may appear somewhat strange, at first.

I am sorry, it is not a matter of my approval. It is a restriction from software psychology. Have a look at Ben Shneiderman's "Software Psychology: Human Factors in Computer and Information Systems", Little, Brown and Co., 1980 (sic!).

I do not know whether the language's grammar can be transformed into an acceptable form or not, but it has to be done if the language is intended to be used by human beings.

BTW, it seems to me that after the "Door Cycle Complete" event has occurred, I (as a passenger) cannot open the door again at the same floor. Is that correct? Or I can, but the mechanism can also begin to move (to make a cutlet of me :-)

best regards, Vladimir

By Charles Moeller on 1 March, 2012 - 1:09 am

Vladimir:

CharlieM wrote:
>> Sorry my language doesn't meet your approval. You must keep in mind that PTQ is primarily a dynamic logic and
>> language, rather than a (static) linear-sequential one, so it may appear somewhat strange, at first.

Vladimir Zyubin wrote:
> I am sorry, it is not my approval. It is a restriction from the software psychology. Have a look at Ben
> Shneiderman's "Software Psychology: Human Factors in Computer and Information Systems", Little, Brown and Co. 1980 (sic!).

You misunderstand. PTQ is not software. It is a means of converting functional specifications, including safety and operations, to real time hardware that performs immediately according to the specifications.

> BTW, It seems to me, after the "Door Cycle Complete" event has appeared, I (as a passenger) cannot open door again
> at the same floor. Is it correct?

Not correct. This issue is treated in the autonomous door motor section of the elevator control system.

> Or I can, but the mechanism also can begin to move (to make a cutlet from me :-)

There are major safety constraints for the operation of the door motor, the direction contactor, and the lift motor. I have shown only the cycles of activities and constraints for the operation of the direction contactor.

As previously stated, direction can't change unless the lift motor is not running and unless the door cycle is complete (with the door closed).

Best regards,
CharlieM

By Vladimir E. Zyubin on 1 March, 2012 - 11:05 am

Charlie:

CharlieM wrote:
>>> Sorry my language doesn't meet your approval. You must keep in mind that PTQ is primarily a dynamic logic and
>>> language, rather than a (static) linear-sequential one, so it may appear somewhat strange, at first.

Vladimir Zyubin wrote:
>> I am sorry, it is not my approval. It is a restriction from the software psychology. Have a look at Ben
>> Shneiderman's "Software Psychology: Human Factors in Computer and Information Systems", Little, Brown and Co. 1980 (sic!).

CharlieM wrote:
> You misunderstand. PTQ is not software. It is a means of converting functional specifications, including safety and
> operations, to real time hardware that performs immediately according to the specifications.

I do understand that PTQ is not a software. Also I understand that the PTQ formalism is just a formal language intended to specify a control algorithm.

And please do not concentrate your attention on the word "software", please pay attention to the word "psychology".

Vladimir wrote:
>> BTW, It seems to me, after the "Door Cycle Complete" event has appeared, I (as a passenger) cannot open door again
>> at the same floor. Is it correct?

CharlieM wrote:
> Not correct. This issue is treated in the autonomous door motor section of the elevator control system.

I suspect the parallelism you mention as an advantage will (sometimes) play a malicious joke here: it could turn the passenger into forcemeat.

Vladimir wrote:
>> Or I can, but the mechanism also can begin to move (to make a cutlet from me :-)

CharlieM wrote:
> There are major safety constraints for the operation of the door motor, the direction contactor, and the lift motor.
> I have shown only the cycles of activities and constraints for the operation of the direction contactor.

> As previously stated, direction can't change unless the lift motor is not running and unless the door cycle is
> complete (with the door closed).

How can I verify the safety of an algorithm described by means of the PTQ formalism? As I wrote previously, I suspect the PTQ formalism (with a parallel implementation) will allow the following parallel operations:
- lift motor running;
- door opening.

best regards, Vladimir

By Charles Moeller on 1 March, 2012 - 6:40 pm

Vladimir,

CharlieM wrote:
>> You misunderstand. PTQ is not software. It is a means of converting functional specifications, including safety and
>> operations, to real time hardware that performs immediately according to the specifications.

Vladimir Zyubin wrote:
> I do understand that PTQ is not a software. Also I understand that the PTQ formalism is just a formal language
> intended to specify a control algorithm.

No. PTQ does not lead to an "algorithm," which is (from Webster's): "a procedure for solving a mathematical problem (as of finding the greatest common divisor) in a finite number of steps that frequently involves repetition of an operation; broadly: a step-by-step procedure for solving a problem or accomplishing some end especially by a computer."

PTQ replaces linear-sequential algorithms with reactive circuits that are real-time and parallel-concurrent and which respond immediately rather than after fetch and decode, search and match, instruction, clock, interrupt, or executive loop times.

> And please do not concentrate your attention on the word "software", please pay attention to the word "psychology".

I prefer not to leave considerations of safety up to "psychology." The best way to ensure safety is to use hardware lockouts, which are simple hardware interlocks (if this, not that).

Safety constraint 1. Door is prevented from opening if not at a floor or while lift motor is running.

This constraint is satisfied in my elevator system by also preventing power to the door motor at any time that power is applied to the lift motor. This is a physical safety interlock, not a software, mental or psychological safeguard.

Safety constraint 2. Lift motor direction is not allowed to change while lift motor is running.

This constraint in my elevator system is satisfied by preventing change in the operating condition of the direction contactor while power is being applied to the lift motor. This is also a physical safety interlock, not a software, mental or psychological safeguard.

Safety constraint 3. Lift motor is prevented from running while door is not closed.

This constraint in my elevator system is satisfied by preventing power from being applied to the lift motor unless the door is closed. This is also a physical safety interlock, not a software, mental, or psychological safeguard.
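Read as combinational logic, the three interlocks gate the power paths directly. A minimal sketch of that reading (mine, not the actual wiring):

```python
def door_motor_power(lift_motor_on, at_floor):
    # Constraint 1: door power only at a floor with the lift motor off.
    return at_floor and not lift_motor_on

def dir_change_enabled(lift_motor_on):
    # Constraint 2: direction contactor frozen while the lift motor runs.
    return not lift_motor_on

def lift_motor_power(door_closed):
    # Constraint 3: lift power only while the door is closed.
    return door_closed

# The feared conjunction "lift running AND door opening" is unreachable:
# door_motor_power(lift_motor_on=True, ...) is False by construction.
```

Because each output is a pure function of the other signals, the unsafe states are excluded structurally rather than by program flow.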

> I suspect the parallelism you mention as an advantage will (sometimes) play a malicious joke here: it could turn
> the passenger into forcemeat.

Not true. See above safety constraints.

> How can I verify the safety of an algorithm described by means of the PTQ formalism?

By the specifications, and by inspecting and checking that the hardware is implemented exactly according to the specifications.

> As I wrote previously, I suspect the PTQ formalism (with a parallel implementation) will allow
> the following parallel operations:
> - lift motor running;
> - door opening.

The physical interlocks specified and implemented to meet the safety constraints prevent the conjunction you fear in two ways. If it were left to software, I would be very afraid. In this case, it is done with hardware interlocks, implemented by:

Constraint 1. Door is prevented from opening if lift motor is running (if the elevator is moving power is not available to open door), and

Constraint 3. Lift motor is prevented from running if door is not closed (power is removed from lift motor if door opens).

In both cases, "door open" and "elevator moving" can't take place at the same time. Your concerns are without basis.

Best regards,
CharlieM

By Steinhoff on 2 March, 2012 - 3:44 am

[ clip]
Vladimir Zyubin wrote:
>> I do understand that PTQ is not a software. Also I understand that the PTQ formalism is just a formal language
>> intended to specify a control algorithm.

CharlieM wrote:
> No. PTQ does not lead to an "algorithm,"

PTQ is a declarative "language" (?) which specifies in general an algorithm in order to control something! This specification includes also the creation of modules and parallel threads or processes.

The implementation could be done with programmable hardware if the spec is compilable to VHDL, e.g.

Best Regards
Armin Steinhoff

By Charles Moeller on 2 March, 2012 - 12:12 pm

Armin,

[ clip]
Vladimir Zyubin wrote:
>>> I do understand that PTQ is not a software. Also I understand that the PTQ formalism is just a formal language
>>> intended to specify a control algorithm.

CharlieM wrote:
>> No. PTQ does not lead to an "algorithm,"

Armin Steinhoff wrote
> PTQ is a declarative "language" (?) which specifies in general an algorithm in order to control something! This
> specification includes also the creation of modules and parallel threads or processes.

PTQ is not a computer language. It is a behavioral description language that may be directly implemented in hardware as a configuration of logic gates.

One writes the physical process description (to be monitored and controlled) as accurately as possible using the operators available in PTQ. The process statement so created is the schematic and architecture of the resulting hardware logic element configuration.

So we have process specification-to-hardware controller (via schematic entry) in one step.

Armin Steinhoff wrote
> The implementation could be done with programmable hardware if the spec is compilable to VHDL, e.g.

PTQ descriptions can be translated into VHDL or stated in VHDL, but that is (usually) an unneeded complication because the PTQ logic element library fully defines the structure (in schematic form) of each of the roster of operators and corresponding hardware logic elements from which the designer can choose. Individual characteristics of each logic element (gate delay, output drive, etc.) will have values typical of the device generation being used (e.g., pz5032cs6a44, a Xilinx CPLD that was available in the year 2000).

PTQ constructions behave more like a dataflow model than an algorithmic one, although signals are typically not registered. PTQ has temporal logic elements with an inherent sense of time, rather than depending upon a TM and software to tell it the time.

Best regards,
CharlieM

By Steinhoff on 2 March, 2012 - 4:09 pm

[ clip]
Armin Steinhoff wrote
>> PTQ is a declarative "language" (?) which specifies in general an algorithm in order to control something! This
>> specification includes also the creation of modules and parallel threads or processes.

CharlieM wrote:
> PTQ is not a computer language. It is a behavioral description language that may be directly implemented in hardware as a configuration of logic gates.

PTQ is a language directly implemented in hardware? How can you implement a language in such a way?
I can define a language by a formal syntax and semantics ...

> One writes the physical process description (to be monitored and controlled) as accurately as possible using the operators available in PTQ.

A process description as a set of Lego Mindstorms modules?? However, your statements are very confusing.

> The process statement so created is the schematic and architecture of the resulting hardware logic element configuration.

> So we have process specification-to-hardware controller (via schematic entry) in one step.

Armin Steinhoff wrote
>> The implementation could be done with programmable hardware if the spec is compilable to VHDL, e.g.
> PTQ descriptions can be translated into VHDL or stated in VHDL, but that is (usually) an unneeded complication

The translation is absolutely necessary for the synthesis of the FPGAs.

> because the PTQ logic element library fully defines the structure (in schematic form) of each of the roster of operators and corresponding hardware logic elements from which the designer can choose. Individual characteristics of each logic element (gate delay, output drive, etc.) will have values typical of the device generation being used (e.g., pz5032cs6a44, a Xilinx CPLD that was available in the year 2000).

It's now 2012 and there are FPGAs with millions of gates.

> PTQ constructions behave more like a dataflow model than an algorithmic one,

If I understand correctly ... PTQ also defines processes, but they are based on algorithms.

> although signals are typically not registered. PTQ has temporal logic elements with an inherent sense of time

At the moment I don't see any temporal elements in PTQ. Do you have a formal definition of PTQ and its semantics?

Those are my last 2 cents ...

Best Regards
Armin Steinhoff

By Charles Moeller on 2 March, 2012 - 7:52 pm

Armin,

CharlieM wrote:
>> PTQ is not a computer language. It is a behavioral description language that may be directly implemented in hardware
>> as a configuration of logic gates.

Armin Steinhoff wrote:
> PTQ is a language directly implemented in hardware? How can you implement a language in such a way?
> I can define a language by a formal syntax and semantics ...

Yes. I can also define a language by the names of its operators and their corresponding logic element schematics in a library.

CharlieM wrote:
>> One writes the physical process description (to be monitored and controlled) as accurately as possible
>> using the operators available in PTQ.

>> The process statement so created is the schematic and architecture of the resulting hardware logic element configuration.

>> So we have process specification-to-hardware controller (via schematic entry) in one step.

Armin Steinhoff wrote:
>>> The implementation could be done with programmable hardware if the spec is compilable to VHDL, e.g.

CharlieM wrote:
>> PTQ descriptions can be translated into VHDL or stated in VHDL, but that is (usually) an unneeded complication

Armin Steinhoff wrote:
> The translation is absolutely necessary for the synthesis of the FPGAs.

One of the options for design entry is schematic capture, described in a Xilinx CPLD document (see below).

I quote from http://www.xilinx.com/publications/products/cpld/cpld_applications_handbook.pdf:

"Schematic capture is the traditional method that designers have used to specify gate arrays and programmable logic devices. It is a graphical tool that allows you to specify the exact gates required and how you want them connected. There are four basic steps to using schematic capture:

1. After selecting a specific schematic capture tool and device library, begin building the circuit by loading the desired gates from the selected library. You can use any combination of gates that you need. You must choose a specific vendor and device family library at this time, but you don't yet have to know what device within that family you will ultimately use with respect to package and speed.

2. Connect the gates together using nets or wires. You have complete control and can connect the gates in any configuration required by your application.

3. Add and label the input and output buffers. These will define the I/O package pins for the device.

4. Generate a netlist. A netlist is a text equivalent of the circuit. It is generated by design tools such as a schematic capture program. The netlist is a compact way for other programs to understand what gates are in the circuit, how they are connected, and the names of the I/O pins. ..."
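As a rough illustration of step 4 (the instance names, gate types, and pin format here are invented for the example, not taken from any actual Xilinx library or tool):

```python
# Hypothetical two-gate circuit: a 2-input AND feeding an inverter.
# Each entry: (instance name, gate type, {pin: net}).
gates = [
    ("U1", "AND2", {"A": "IN1", "B": "IN2", "Y": "N1"}),
    ("U2", "INV",  {"A": "N1", "Y": "OUT"}),
]

def netlist(gates):
    """Render the circuit as a simple text netlist: one line per gate,
    listing its type and which net each pin connects to."""
    return "\n".join(
        name + " " + kind + " " + " ".join(p + "=" + n for p, n in pins.items())
        for name, kind, pins in gates
    )

print(netlist(gates))
```

The point of the text form is exactly what the handbook says: a compact representation of which gates exist, how they are wired, and which nets are the I/O pins.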

Other options allow the designer to add new library logic elements by specifying the kind and interconnections of gates. It is this option that allows me to specify unique groupings of logic gates that perform or recognize the desired temporal characteristics fundamental to real time processes and embed those in the controller.

CharlieM wrote:
>> because the PTQ logic element library fully defines the structure (in schematic form) of each of the roster of
>> operators and corresponding hardware logic elements from which the designer can choose. Individual characteristics
>> of each logic element (gate delay, output drive, etc.) will have values typical of the device generation being
>> used (e.g., pz5032cs6a44, a Xilinx CPLD that was available in the year 2000).

Armin Steinhoff wrote:
> It's now 2012 and there are FPGAs with millions of gates.

Most of which are not needed for the simple controllers I have been writing about. Why use more than the minimum that will guarantee function and safety? More gates and more software will only increase the risk of faults.

CharlieM wrote:
>> PTQ constructions behave more similar to a dataflow, than an algorithmic model,

Armin Steinhoff wrote:
> If I understand correctly ... PTQ also defines processes, but they are based on algorithms.

PTQ describes real time processes not based upon algorithms. PTQ uses a non-registered real time dataflow-type method.

CharlieM wrote:
>> although signals are typically not registered. PTQ has temporal logic elements with an inherent sense of time

Armin Steinhoff wrote:
> At the moment I don't see any temporal elements in PTQ. Do you have a formal definition of PTQ and its semantics?

The temporal logic in the DIR contactor controller that I described was specified by "(DO SEQ DC)." This means "the sequence: Door Open, then Door Closed." A specified sequence qualifies as an identifiable temporal characteristic or quality.
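As a behavioral sketch only (the event-list framing is an illustration of the recognized characteristic, not the PTQ hardware itself), the (DO SEQ DC) recognizer can be modeled as:

```python
def door_cycle_complete(events):
    """Recognize the sequence 'Door Open' followed later by
    'Door Closed' -- the (DO SEQ DC) temporal characteristic."""
    seen_open = False
    for ev in events:
        if ev == "DO":
            seen_open = True          # first element of the sequence seen
        elif ev == "DC" and seen_open:
            return True               # DO then DC: sequence recognized
    return False                      # sequence not (yet) observed
```

Note that `door_cycle_complete(["DC", "DO"])` stays False: the same two events in the opposite order do not satisfy the sequence, which is what makes it a temporal rather than a purely combinational condition.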

Best regards,
CharlieM

By Steinhoff on 3 March, 2012 - 4:04 am

[clip]
Armin Steinhoff wrote:
>> PTQ is a language directly implemented in hardware? How can you implement a language in such a way?
>> I can define a language by a formal syntax and semantics ...

CharlieM wrote:
> Yes. I can also define a language by the names of its operators and their corresponding logic element schematics in a library.

That's not a sufficient way to define a formal language.

[clip]
Armin Steinhoff wrote:
>> The translation is absolutely necessary for the synthesis of the FPGAs.

CharlieM wrote:
> One of the options for design entry is schematic capture, described in a Xilinx CPLD document (see below).

> I quote from http://www.xilinx.com/publications/products/cpld/cpld_applications_handbook.pdf:

So you would translate PTQ manually to the function block display representation? Really??

Armin Steinhoff wrote:
>> At the moment I don't see any temporal elements in PTQ. Do you have a formal definition of PTQ and its semantics?

CharlieM wrote:
> The temporal logic in the DIR contactor controller that I described was specified by "(DO SEQ DC)." This means "the sequence: Door Open, then Door Closed."

Sorry, this has nothing to do with temporal logic ... IMHO.

Best Regards
Armin Steinhoff

By Charles Moeller on 3 March, 2012 - 1:12 pm

Armin,

[clip]
Armin Steinhoff wrote:
> So you would translate PTQ manually to the function block display representation? Really??

Yes, really.
1. I write the physical process specification in terms of the PTQ operators, for which there exists a dictionary of definitions together with the corresponding hardware logic elements.

2. When all parts of the physical process have been described in PTQ terms, and the syntactical rules have been correctly followed, the design may be rewritten as a schematic of the process controller. The schematic has selected logic elements that correspond to the specification and the logic elements are interconnected according to the architecture specific to that controller. The architecture "emerges" from the specification.

3. The schematic is instantiated in the target CPLD or FPGA.

4. A test procedure is then written and the logic assembly is tested for correct function including safety measures.

5. Any design faults found are corrected and testing is resumed. This continues until the hardware and specification agree, all functions can be fulfilled, and all safety measures are upheld.

At this point, the design can be said to be complete and ready for initial field testing.

Armin Steinhoff wrote:
>>> At the moment I don't see any temporal elements in PTQ. Do you have a formal definition of PTQ and its semantics?

CharlieM wrote:
>> The temporal logic in the DIR contactor controller that I described was specified by "(DO SEQ DC)." This
>> means "the sequence: Door Open, then Door Closed."

Armin Steinhoff wrote:
> Sorry, this has nothing to do with temporal logic ... IMHO.

The reason you don't recognize sequence in time rather than sequence in space (e.g., numerical sequence) as having to do with temporal effects is that all so-called temporal logic in common use (everything with which you are familiar) is built in the space domain, through the "magic" of TMs, software, and numbers (or numbered spaces). My temporal logic is a different thing as it exists in and for the time domain and needs no conversion to and from the space domain. It is therefore more effective, efficient, and appropriate to real time process control.

Best regards,
CharlieM

By Vladimir E. Zyubin on 3 March, 2012 - 9:06 am

CharlieM wrote:
>>> You misunderstand. PTQ is not software. It is a means of converting functional specifications, including safety and
>>> operations, to real time hardware that performs immediately according to the specifications.

Vladimir Zyubin wrote:
>> I do understand that PTQ is not a software. Also I understand that the PTQ formalism is just a formal language
>> intended to specify a control algorithm.

CharlieM wrote:
> No. PTQ does not lead to an "algorithm," which is (from Webster's): "a procedure for solving a mathematical
> problem (as of finding the greatest common divisor) in a finite number of steps that frequently involves
> repetition of an operation; broadly: a step-by-step procedure for solving a problem or accomplishing some end
> especially by a computer."

Alas, I see the Webster's brainpeckers don't know about control algorithms that have an infinite number of steps because of unpredictably long run-times. They speak about algorithms for calculation only.

So, please have a look at http://en.wikipedia.org/wiki/Algorithm especially at the part called "Implementation".

Vladimir Zyubin wrote:
>> And please do not concentrate your attention on the word "software", please pay attention to the word "psychology".

CharlieM wrote:
> I prefer not to leave considerations of safety up to "psychology." The best way to ensure safety is to use hardware
> lockouts, which are simple hardware interlocks (if this, not that).

I must confess it seems to me you practice a very disputable approach, because errors in algorithms are made by a programmer (or designer, if we speak of a hardware implementation), not by a computer.

CharlieM wrote:
> Safety constraint 1. Door is prevented from opening if not at a floor or while lift motor is running.

> This constraint is satisfied in my elevator system by also preventing power to the door motor at any time that power
> is applied to the lift motor. This is a physical safety interlock, not a software, mental or psychological safeguard.

> Safety constraint 2. Lift motor direction is not allowed to change while lift motor is running.

> This constraint in my elevator system is satisfied by preventing change in the operating condition of the direction
> contactor while power is being applied to the lift motor. This is also a physical safety interlock, not a
> software, mental or psychological safeguard.

> Safety constraint 3. Lift motor is prevented from running while door is not closed.

> This constraint in my elevator system is satisfied by preventing power from being applied to the lift motor unless
> the door is closed. This is also a physical safety interlock, not a software, mental, or psychological safeguard.

Thank you very much. You wrote exactly those things that I expected to see.
So again:
Your constraints do not prevent PARALLEL start of execution of the following commands:
"Lift motor running" AND "Door opening" WHILE BOTH "door is closed" AND "lift motor is not running" AND "the elevator is on a floor".

> In this case, it is hardware interlocks, implemented by:
>
> Constraint 1. Door is prevented from opening if lift motor is running (if the elevator is moving power is not
> available to open door), and
>
> Constraint 3. Lift motor is prevented from running if door is not closed (power is removed from lift motor if
> door opens).
>
> In both cases, door open and elevator moving can't take place at the same time. Your concerns are without basis.

See above remarks.

best regards, Vladimir.

Vladimir,
---- clip ----
Vladimir Zyubin wrote:
> Alas, I see the Webster's brainpeckers don't know about control algorithms that have an infinite number of steps because of
> unpredictably long run-times. They speak about algorithms for calculation only.

> So, please have a look at http://en.wikipedia.org/wiki/Algorithm especially at the part called "Implementation".

I did, but Vladimir, you should have a look, also:
"In mathematics and computer science, an algorithm i/ˈælɡərɪðəm/ (from Algoritmi, the Latin form of Al-Khwārizmī) is a step-by-step procedure for calculations. Algorithms are used for calculation, data processing, and automated reasoning.

More precisely, an algorithm is an effective method expressed as a finite list [1] of well-defined instructions [2] for calculating a function. [3] Starting from an initial state and initial input (perhaps empty), [4] the instructions describe a computation that, when executed, will proceed through a finite [5] number of well-defined successive states, eventually producing "output" [6] and terminating at a final ending state. …"

Vladimir Zyubin wrote:
>>> And please do not concentrate your attention on the word "software", please pay attention to the word "psychology".

CharlieM wrote:
>> I prefer not to leave considerations of safety up to "psychology." The best way to ensure safety is to use hardware
>> lockouts, which are simple hardware interlocks (if this, not that).

Vladimir Zyubin wrote:
> I must confess it seems to me you practice a very disputable approach, because errors in algorithms are made by
> a programmer (or designer, if we speak of a hardware implementation), not by a computer.

--snip--

> Thank you very much. You wrote exactly those things that I expected to see. So again:
> Your constraints do not prevent PARALLEL start of execution of the following commands:
> "Lift motor running" AND "Door opening" WHILE BOTH "door is closed" AND "lift motor is not running" AND "the elevator
> is on a floor".

Perhaps you are thinking like a computer and are subject to the same types of faults, or it may be the case that you do not understand the term "hardware interlock."

I will illuminate: A hardware interlock is designed and implemented so that, for instance, if an equipment high voltage cover is removed, the power is interrupted, thus removing the high voltage hazard that would otherwise be exposed by removing the cover. It is a case of this (cover removal), not that (high voltage present). This type of function is also called a safety interlock.

Another example of a hardware interlock (and is the one used in my elevator controller) is the arrangement of a power contactor or "switchgear" that can send power to one and only one of two places: a) to the door motor, or b) to the lift motor. The contact arrangement for this type of switchgear is Form C, or SPDT (single pole double throw) or equivalent. Power comes in on the common line (C) and is directed to either (but not both) the A contact (normally open) or the B contact (normally closed).
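A minimal sketch of that Form C arrangement (the signal names are mine, added for illustration; only the contact behavior is taken from the description above):

```python
def spdt_contactor(dir_coil_energized):
    """Form C / SPDT switchgear: power on the common line (C) reaches
    the A contact (normally open) when the coil is energized, else the
    B contact (normally closed) -- one load or the other, never both."""
    lift_motor_powered = dir_coil_energized        # A contact (NO)
    door_motor_powered = not dir_coil_energized    # B contact (NC)
    return door_motor_powered, lift_motor_powered

# In every possible coil state, exactly one output carries power:
for coil in (False, True):
    door, lift = spdt_contactor(coil)
    assert door != lift
```

The mutual exclusion is a property of the contact geometry itself, which is the sense in which it is a physical interlock rather than a software check.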

Your "parallel execution" (thinking like a computer?) does not happen.

Best regards,
CharlieM

By Steinhoff on 4 March, 2012 - 2:58 am

> ---- clip ----
Vladimir Zyubin wrote:
>> Alas, I see the Websters's brainpeckers don't know about control algorithms that have infinite number of steps because of
>> unpredictable long run-time. They speak about algorithms for calculation only.
>> So, please have a look at http://en.wikipedia.org/wiki/Algorithm especially at the part called "Implementation".

CharlieM wrote:
> I did, but Vladimir, you should have a look, also:
> "In mathematics and computer science, an algorithm i/ˈælɡərɪðəm/ (from Algoritmi, the Latin form of Al-Khwārizmī)
> is a step-by-step procedure for calculations. Algorithms are used for calculation, data processing, and automated reasoning.

> More precisely, an algorithm is an effective method expressed as a finite list [1] of well-defined instructions
> [2] for calculating a function. [3] Starting from an initial state and initial input (perhaps empty), [4] the instructions
> describe a computation that, when executed, will proceed through a finite [5] number of well-defined successive states,
> eventually producing "output" [6] and terminating at a final ending state. …"

Algorithms also include endless loops ... e.g. the algorithm of a finite automaton. You know that as a PLC programmer.

FPGAs also process step by step ... that is, clock by clock. Every clock cycle is a processing cycle, and the FPGA is simply a specialized CPU.

Best Regards
Armin Steinhoff

Armin,

---- clip ----
Armin Steinhoff wrote:
> Algorithms also include endless loops ... e.g. the algorithm of a finite automaton. You know that as a PLC programmer.

> FPGAs also process step by step ... that is, clock by clock. Every clock cycle is a processing cycle, and
> the FPGA is simply a specialized CPU.

--Can be, but is not necessarily so.

There are non-clocked circuits that are hand-crafted for a specific purpose by means of designer inspiration. We used to call these "random logic," or "glue logic." Today, these inventions can be easily configured in "sea-of-gates" FPGAs. It is this class of circuit and designs that are used in PTQ configurations.

Best regards,
CharlieM

By Steinhoff on 4 March, 2012 - 5:45 pm

Armin Steinhoff wrote:
>> Algorithms also include endless loops ... e.g. the algorithm of a finite automaton. You know that as a PLC programmer.
>> FPGAs also process step by step ... that is, clock by clock. Every clock cycle is a processing cycle, and
>> the FPGA is simply a specialized CPU.

CharlieM wrote:
> --Can be, but is not necessarily so.

> There are non-clocked circuits that are hand-crafted for a specific purpose by means of designer inspiration. We used
> to call these "random logic," or "glue logic." Today, these inventions can be easily configured in "sea-of-gates" FPGAs.

Every sea of gates works clocked ...

Best Regards
Armin Steinhoff

Armin Steinhoff wrote:
--snip--
>>> FPGAs also process step by step ... that is, clock by clock. Every clock cycle is a processing cycle, and
>>> the FPGA is simply a specialized CPU.

CharlieM wrote:
>> --Can be, but is not necessarily so.

>> There are non-clocked circuits that are hand-crafted for a specific purpose by means of designer inspiration. We used
>> to call these "random logic," or "glue logic." Today, these inventions can be easily configured in "sea-of-gates" FPGAs.

Armin Steinhoff wrote:
> Every sea of gates works clocked

Not so.
The referenced Xilinx document below confirms my experience that each macrocell can be used as a non-registered combinatorial function. That means an option of NO CLOCKS, no registers.

See http://www.xilinx.com/support/documentation/data_sheets/ds012.pdf page 4:

"Figure 5 shows the architecture of the macrocell used in the CoolRunner XPLA3 CPLD. Any macrocell can be reset or preset on power-up. Each macrocell register can be configured as a D-, T-, or Latch-type flip-flop, or bypassed if the macrocell is required as a combinatorial logic function."

Best regards,
CharlieM

By Steinhoff on 5 March, 2012 - 8:26 am

Armin Steinhoff wrote:
>> Every sea of gates works clocked

Charles wrote:
> Not so. The referenced Xilinx document below confirms my experience that each macrocell can be
> used as a non-registered combinatorial function. That means an option of NO CLOCKS, no registers.

I was talking about the sea of gates of normal LUT-based FPGAs ... not about macrocell-based CPLDs.

So you want to build your algorithm manually within CPLDs with asynchronously working macrocells? I can only say: have fun :)

Best Regards
Armin Steinhoff

By Vladimir E. Zyubin on 4 March, 2012 - 7:15 am

---- clip ----
Vladimir Zyubin wrote:
>> Alas, I see the Webster's brainpeckers don't know about control algorithms that have an infinite number of steps because of
>> unpredictably long run-times. They speak about algorithms for calculation only.

>> So, please have a look at http://en.wikipedia.org/wiki/Algorithm especially at the part called "Implementation".

CharlieM wrote:
> I did, but Vladimir, you should have a look, also: "In mathematics and computer science, an algorithm
> i/ˈælɡərɪðəm/ (from Algoritmi, the Latin form of Al-Khwārizmī) is a step-by-step procedure for calculations.
> Algorithms are used for calculation, data processing, and automated reasoning.

> More precisely, an algorithm is an effective method expressed as a finite list [1] of well-defined instructions
> [2] for calculating a function. [3] Starting from an initial state and initial input (perhaps empty), [4] the
> instructions describe a computation that, when executed, will proceed through a finite [5] number of
> well-defined successive states, eventually producing "output" [6] and terminating at a final ending state. …"

There are a lot of bad and misleading definitions to be found on the web, I do understand that. And the bad definitions can be transformed into an even worse form, as you have just shown. I understand that as well. Really, I cannot grasp why you insist that control algorithms are not algorithms.

Vladimir Zyubin wrote:
>> Thank you very much. You wrote exactly those things that I expected to see. So again:
>> Your constraints do not prevent PARALLEL start of execution of the following commands:
>> "Lift motor running" AND "Door opening" WHILE BOTH "door is closed" AND "lift motor is not running" AND "the
>> elevator is on a floor".

CharlieM wrote:
> Perhaps you are thinking like a computer and are subject to the same types of faults or it may be the case
> that you do not understand the term "hardware interlock."

I am sorry for the quote: "The use of anthropomorphic terminology when dealing with computing systems is a symptom of professional immaturity." A computer cannot think; it can only execute.

As to the problem.

Parallel start of execution of "Lift motor running" AND "Door opening" WHILE BOTH "door is closed" AND "lift motor is not running" AND "the elevator is on a floor" is in full accordance with your constraints, i.e. the parallel start does not violate the restrictions.

Please do not reveal your secret about the implementation. Please refrain from a wordy essay about hardware interlocking ... just explain (if you can) where the above statement logically violates your own constraints.

As to me the answer is clear. What about your opinion?

Best regards, Vladimir

---- clip ----
Vladimir Zyubin wrote:
> As to the problem.

> Parallel start of execution of "Lift motor running" AND "Door opening" WHILE BOTH "door is closed" AND "lift motor is
> not running" AND "the elevator is on a floor" is in full accordance with your constraints, i.e. the parallel start does
> not violate the restrictions.

> just explain (if you can) where the above statement logically violates your own constraints.

> As to me the answer is clear. What about your opinion?

Either your statements above are inconsistent or I have misinterpreted what you wrote.
You have: "Lift motor running" AND "Door opening" WHILE BOTH "door is closed" AND "lift motor is not running" AND "the elevator is on a floor"

Logically, you have lift motor (running and not running) and also (door open and closed). These pairs are mutually exclusive conditions (at the same time) but may hold at different times. Your statements are not consistent with my stated constraints. You either do not have a valid objection or you have not stated your objection clearly enough.

Please advise.

Best regards,
CharlieM

By Vladimir E. Zyubin on 5 March, 2012 - 4:34 am

Vladimir Zyubin wrote:
>> As to the problem.

>> Parallel start of execution of "Lift motor running" AND "Door opening" WHILE BOTH "door is closed" AND "lift motor is
>> not running" AND "the elevator is on a floor" is in full accordance with your constraints, i.e. the parallel start does
>> not violate the restrictions.
>
>> just explain (if you can) where the above statement logically violates your own constraints.
>
>> As to me the answer is clear. What about your opinion?

CharlieM wrote:
> Either your statements above are inconsistent or I have misinterpreted what you wrote.

> You have: "Lift motor running" AND "Door opening" WHILE BOTH "door is closed" AND "lift motor is not running"
> AND "the elevator is on a floor"

> Logically, you have lift motor (running and not running) and also (door open and
> closed). These pairs are mutually exclusive conditions (at the same time) but may hold at different times. Your
> statements are not consistent with my stated constraints. You either do not have a valid objection or you have not
> stated your objection clearly enough.

> Please advise.

I think the problem (cognitive dissonance) is in the denotation of the sentence "Lift motor running". I interpret it as "to switch ON the lift motor contactor". And it seems to me you interpret it in some other way.

I draw a very simple (trivial) conclusion: a parallel implementation can lead to switching ON both the lift motor contactor and the door open motor contactor in parallel.

And I thought such a trivial conclusion needed no explanation at all. But it seems that is not the case.

So, again, please imagine the state of your system: "door is closed" AND "lift motor is not running" AND "the elevator is on a floor" (the values of the signals are ON, OFF, ON).

In this state you can (according to your restrictions and descriptions):
1. switch ON the lift motor contactor;
2. switch ON the door open motor contactor.

All I say is: IN YOUR PARALLEL IMPLEMENTATION, BOTH OF THESE EVENTS CAN HAPPEN IN PARALLEL (at the same time).

The problem I am talking about is close to the race condition problem:
http://en.wikipedia.org/wiki/Race_condition
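This point can be restated as a small check (the predicate names are mine, and this is a sketch of the constraints as stated, not of any actual implementation): in the idle state, the constraints taken alone enable both commands at the same instant.

```python
def may_open_door(at_floor, lift_running):
    # Constraint 1: door may open only at a floor while the lift is stopped
    return at_floor and not lift_running

def may_start_lift(door_closed):
    # Constraint 3: lift may run only while the door is closed
    return door_closed

# Idle state: door closed, lift stopped, car at a floor (ON, OFF, ON)
at_floor, lift_running, door_closed = True, False, True

# Both enabling conditions hold simultaneously, so nothing in the
# constraints themselves forbids issuing both commands in parallel.
both_enabled = (may_open_door(at_floor, lift_running)
                and may_start_lift(door_closed))
```

Whether the physical contactor arrangement then prevents the race is a separate question from what the written constraints permit.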

best regards, Vladimir

---- clip ----
Vladimir Zyubin wrote:
> Really, I cannot grasp why you insist that control algorithms are not algorithms.

Please, Vladimir if you will, consider that the following three statements [a), b), c)] are true:

a) Algorithms are step-by-step procedures, whether or not they are applied to control situations.
b) Not all problems to be solved are control problems.

c) Some control problems are solvable by other than step-by-step (algorithmic) procedures. (Analog functions [f(x) = 2x], [f(x) = sin x], [f(x) = x^2] are examples.)

An algorithmic procedure for determining the temporal order in which two events (A and B) occur requires the signal from A to be repetitively sampled and the signal from B to be repetitively sampled. Whenever B occurs, a time-stamp is recorded in a designated location and B-sampling is suspended. Whenever A occurs, a time stamp for it is recorded in a different location and A-sampling is suspended. When both designated locations have time stamps, they are differenced. The sign of the difference is used to determine the order of events. This is a typical way for event order to be determined by computation.
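A rough sketch of that sampling procedure (the sampled-waveform input format is my own framing for the example):

```python
def order_by_timestamp(samples):
    """Repeatedly sample signals A and B; record a time stamp at each
    signal's first occurrence and suspend further sampling of it; once
    both stamps exist, the sign of their difference gives the order.
    (Simultaneous arrival within one sample period is unresolved here.)"""
    t_a = t_b = None
    for t, (a, b) in enumerate(samples):
        if t_a is None and a:
            t_a = t                   # stamp A, suspend A-sampling
        if t_b is None and b:
            t_b = t                   # stamp B, suspend B-sampling
        if t_a is not None and t_b is not None:
            return "A first" if t_a < t_b else "B first"
    return None                       # one of the events never occurred
```

Note that the temporal resolution of this method is bounded by the sampling period, and the answer is only available after both stamps have been stored and differenced.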

A non-algorithmic method of determining the same answer with a lot less difficulty is found in the PTQ logic system, in which a special logic element with permanent connections to the monitored signal sources determines, after a reset, the order of the two received signals immediately upon receiving the second of the two signals.
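By contrast, the behavior of such an order-detecting element can be imagined as a "first wins" latch. The following gate-level-style simulation is entirely my own construction for illustration, not the actual PTQ element:

```python
def first_of_two(trace):
    """qa and qb are cross-coupled: the first input to rise sets its
    own output and locks the other side out, so the answer exists the
    moment the second signal arrives -- no time stamps, no arithmetic."""
    qa = qb = False
    for a, b in trace:                     # sampled input waveforms
        qa = qa or (a and not qb)          # A sets qa unless B already won
        qb = qb or (b and not qa)          # B sets qb unless A already won
    if qa and not qb:
        return "A first"
    if qb and not qa:
        return "B first"
    return None                            # neither input has risen yet
```

A real cross-coupled circuit evaluates both gates continuously; this sequential simulation is biased toward A when both inputs rise in the same step, loosely analogous to the metastability a physical arbiter must resolve.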

The PTQ system has a number of special temporal and spatio-temporal logic elements that operate in a non-algorithmic way. That's why collecting their operation(s) under the term "algorithm" is inappropriate. Algorithms are for computation. PTQ methods do not necessarily include or imply computation.

Best regards,
CharlieM

By Vladimir E. Zyubin on 5 March, 2012 - 4:57 am

Vladimir Zyubin wrote:
>> Really, I cannot grasp why you insist that control algorithms are not algorithms.

Charlie wrote:
> Please, Vladimir if you will, consider that the following three statements [a), b), c)] are true:

> a) Algorithms are step-by-step procedures, whether or not they are applied to control situations.
> b) Not all problems to be solved are control problems.

> c) Some control problems are solvable by other than step-by-step (algorithmic)
> procedures. (Analog functions [f(x) = 2x], [f(x) = sin x], [f(x) = x^2] are examples.)

I will, why not... and it is absolutely clear to me that calculation of "sin x" is just a step, regardless of its implementation (digital, analog, mechanical, graphical, etc. ... even if it were obtained directly from an egregore... esoteric implementation :-)

Vladimir,

You wrote:
>>> Really, I cannot grasp why you insist that control algorithms are not algorithms.

Charlie wrote:
>> Please, Vladimir if you will, consider that the following three statements [a), b), c)] are true:

>> a) Algorithms are step-by-step procedures, whether or not they are applied to control situations.
>> b) Not all problems to be solved are control problems.

>> c) Some control problems are solvable by other than step-by-step (algorithmic)
>> procedures. (Analog functions [f(x) = 2x], [f(x) = sin x], [f(x) = x^2] are examples.)

Vladimir Zyubin wrote:
> I will, why not... and it is absolutely clear to me that calculation of "sin x" is just a step, regardless of its
> implementation (digital, analog, mechanical, graphical, etc. ... even if it were obtained directly from an egregore... esoteric implementation :-)

Sin x can be output "instantaneously" as an analog value as the shaft of a sine potentiometer rotates through successive angles. This component was common in servomotor-based flight simulators for pilot training in the 1950s & 1960s (i.e., before the simulation field went digital).

So the method of implementation is often the key, and that is my point with PTQ.

Some functions and operations that can be done simply in time-domain temporal logic hardware, rather than imperfectly in space-domain temporal logic software, are: order, persistence, concurrency, and repetition. These operations and others are primitives in PTQ and all work consistently and compatibly in the time domain with the existing space-domain primitives of conjunction and negation, and their combinations. As an example that such things do in fact exist, I have (previously) described the workings of the operation that determines, on the fly (as it happens), the order of two received signals, rather than after the fact via computational procedures on values from memory.

Best regards,
CharlieM

[clip]
CharlieM wrote:
> In explanation, let us examine a simple control function, such as the direction (DIR) controller for an elevator.

> The safety and operational constraints for a direction controller for a two-floor elevator is stated:

> "The direction contactor (DIR) can change only when at a floor while the lift motor is off and after the door has opened and closed and the alternate floor has been requested. The alternate floor request is assessed after the constraints have been fulfilled."

Is DIR a direction controller or a direction contactor?

> Stated in PTQ English:
> At Floor 1 AND Lift Motor OFF AND Door Open SEQ Closed CREATES Door Cycle Complete WHILE DCC AND Floor 2 Requested CREATES DIR WHILE At Floor 2 AND Lift Motor OFF AND Door Open SEQ Closed CREATES Door Cycle Complete WHILE DCC AND Floor 1 Requested CREATES /DIR WHILE LM Resets DCC WHILE [(LM AND DIR) = UP] WHILE [(LM AND /DIR) = DN]

> Condensed:
> {[FL1 * /LM * (DO SEQ DC)] # DCC} [(DCC * FL2Req) # DIR] [(DIR * LM) = UP] {[FL2 * /LM * (DO SEQ DC)] # DCC} [(DCC * FL1Req) # /DIR] [(/DIR * LM) = DN]

This looks like a cryptified function block display or a sequence of macros ... it's unreadable, like a "Lisp" program. Writing such a small application in one line is not an innovation :)

> The real-time logic of this part of an elevator controller can be stated in one continuous line of PTQ parallel-concurrent "source code" and implemented in functional hardware having seven inputs and one output in a configuration of less than 50 equivalent gates.

I would code it directly in VHDL ...

> This controller operates at the speed of the process and with a latency of only six gate-delays in the directly-connected logic (in whatever device technology used). Temporal discrimination is less than one gate-delay.

> Q: How many lines of linear-sequential code and equivalent gates would be used in a TM-type controller?

It would take one single line containing one expression of a macro call :)
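For concreteness, the quoted constraints can also be rendered as a small software state machine. The following Python sketch is purely illustrative (its structure and names are a hypothetical rendering, not Armin's macro nor CharlieM's PTQ hardware); signal names follow the condensed form above (FL1/FL2, LM, DO/DC, DCC, FL1Req/FL2Req, DIR):

```python
class DirController:
    """Hypothetical software rendering of the quoted elevator constraints."""
    def __init__(self):
        self.dir_up = False        # DIR contactor state (True = UP)
        self.dcc = False           # Door Cycle Complete
        self.door_was_open = False

    def step(self, at_fl1, at_fl2, lm, door_open, fl1_req, fl2_req):
        # "LM resets DCC": while the lift motor runs, nothing may change.
        if lm:
            self.dcc = False
            self.door_was_open = False
            return self.dir_up
        # "DO SEQ DC": door opened, then closed, at a floor with motor off.
        if door_open and (at_fl1 or at_fl2):
            self.door_was_open = True
        if self.door_was_open and not door_open:
            self.dcc = True
        # DIR may change only after the constraints (DCC) are fulfilled
        # and the alternate floor has been requested.
        if self.dcc and at_fl1 and fl2_req:
            self.dir_up = True
        if self.dcc and at_fl2 and fl1_req:
            self.dir_up = False
        return self.dir_up

c = DirController()
c.step(at_fl1=True, at_fl2=False, lm=False, door_open=True,
       fl1_req=False, fl2_req=False)                       # door opens at FL1
c.step(at_fl1=True, at_fl2=False, lm=False, door_open=False,
       fl1_req=False, fl2_req=True)                        # door closes, FL2 requested
print(c.dir_up)   # → True : DIR set to UP
```

The point of contention in the thread is not whether this can be written in software, but whether the polled, step-by-step version above is preferable to logic that reacts within a few gate-delays.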

Best Regards
Armin Steinhoff

Charles,

You are either a genius or a crackpot - with the words you have written, I cannot tell which. If you have a user manual, I would certainly enjoy reading it. How can I get a copy?

Dick Caro
Richard H. Caro, CEO, CMC Associates
Certified Automation Professional (ISA)
Buy my books at the ISA Bookstore:
Wireless Networks for Industrial Automation
Automation Network Selection
Consumers Guide to Fieldbus Network Equipment for Process Control
===============================================================

By Charles Moeller on 28 February, 2012 - 2:27 pm

Dick,

You wrote:
> You are either a genius or a crackpot - with the words you have written, I
> cannot tell which. If you have a user manual, I would certainly enjoy reading
> it. How can I get a copy?

I'll take that as a compliment after my 50 years of experience with controls, the last 40 as an engineer.

I do have a user manual and am now working on other books plus patents. If I take the step of giving anyone a look, it will require a very strong NDA/NUA and include plans toward eventual commercialization.

Best regards,
CharlieM

> First, there are, and have been for decades, microcontrollers that can
> present summed values in very short time intervals. So, while having an
> unclocked 16 bit adder might seem all new and clever, they aren't, and they
> aren't going to solve any of the problems that are being discussed.

This may be the first rational posting on this thread. Thank you, Julie. I too have managed programmers. In my experience, an excellent programmer can outperform a merely good one by a factor of more than 100. No other profession has such a gap in performance. There can also be a great difference between programmers who produce maintainable code and those who produce write-only code that must be redone rather than maintained. The excellent programmers who worked for me produced maintainable code as well as completing assigned tasks rapidly. Maybe this was their secret to productivity.

To me the problem was always to "do the right thing" more than to "do things right." Completing a program in record time that only maybe solves the problem is not the goal. The goal is FIRST a careful definition of the problem to be solved, and then making sure the program does that. I always insisted that the programmer document the program he/she was assigned BEFORE writing the first line of code. We carefully reviewed this USER MANUAL before coding began, to be sure that it addressed the right problem/application.

How the problem/application is solved (hardware or software) is irrelevant as long as the definition is correct. Personally, I cannot imagine solving process control problems/applications in pure hardware, but I am open to new solutions. However, do not blame software for all of the ills of digital systems. The solutions are management issues.

Dick Caro

Richard H. Caro, CEO, CMC Associates
Certified Automation Professional (ISA)
Buy my books at the ISA Bookstore:
Wireless Networks for Industrial Automation
Automation Network Selection
Consumers Guide to Fieldbus Network Equipment for Process Control
===============================================================

By Vladimir E. Zyubin on 27 February, 2012 - 6:14 am

Dick Caro wrote:
> To me the problem was always to "do the right thing" more than "doing things right." Rather than to complete a
> program that maybe solved a problem, but doing it in record time is not the goal. The goal is FIRST careful definition of
> the problem to be solved, then to make sure the program did that. I always have insisted that the programmer document
> the program he/she was assigned BEFORE writing the first line of code. We carefully reviewed this USER MANUAL
> before the code was started to be sure that it was the right problem/application.

Mostly agree, but the key goal is to reflect the algorithm that is already "present" in the object under control in a structurally corresponding description. This provides readability, maintainability, localized effects of corrections, etc. The algorithm is in the head of the designer of the object.

How high is the risk that system safety will become a "weak point"? Will we be able to keep up with the increasing threat?

Didn't read all replies, so sorry if this is a repeat.

But in today's world, hardware is useless without software.
So long as we have a world of digital electronics, we will be dependent on software.

That's just the way it works. The only thing that can be done is to embed it in a manner where it cannot be changed.

By Curt Wuollet on 9 November, 2012 - 1:27 pm

Actually I am somewhat optimistic. While automation systems today are maximally vulnerable due to their singular dependence on the least secure OS with a truly abysmal security record, the rest of the computing world is busy diversifying. Even if solutions like Android are not particularly secure (by design), the diversity in itself raises the bar. And it is a door-opener for SELinux and others that are, or at least can be made, secure but can run much of the same software. This is happening quickly with non-Wintel blade servers and clusters adding to the absolutely crazy pace of the mobile market. As long as networks remain the primary attack vector, this unpredictability will make it much more difficult to penetrate systems. At least it will be non-trivial. Who knows, in the next 20 years, the automation folks, or more likely their customers, may begin to demand some semblance of security and Big Automation will belatedly follow the trend. A chain is only as strong as its weakest link, and it may soon be realistic and economical to design that link out. It would be good to do that before the grid goes down or even more dire damage is done, but it surely will be at least talked about after.

Regards
cww

RussB:

Software is a consequence of using data processors or Turing-type machines (TMs) exclusively.

The fundamental problems of TM-dominated systems persist, and will do so for as long as we insist upon solving all our problems via the Turing paradigm: shared-resource hardware, software, and linear-sequential operation. I've intimately experienced four decades of "progress," including millions-fold advances in hardware capabilities. These meaningful improvements have produced Moore's-Law effects that have made computing available to everyman at reasonable cost. But the software crisis has been ongoing, both before and since E.W. Dijkstra named it in his 1972 Turing Award lecture. We know that software can't cure the ills of software, because it hasn't in the last 60-plus years, and not for lack of trying.

My thesis is that the problems of software can be cured by smarter hardware.

The typical machine/assembly code has not made any great advances in decades.

The fundamental logic has not changed much either, nor have computer activities. All higher-level computer languages (i.e., in software) are ultimately decomposable to, hence built up from, sequences and combinations of the Boolean operations AND, NOT, and STORE. In machine language, those operations are used to determine explicitly: a) the locations from which to acquire the numerical or conditional operands, b) what Boolean operations to perform, c) where to put the results, and d) the next step in the program. Every step is, and must be, predetermined. At bottom, that is all a computer can do.
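The four steps (a)-(d) can be made concrete with a toy interpreter. The opcodes and memory layout below are invented for illustration only, not any real instruction set:

```python
# A toy machine whose only operations are AND, NOT, and STORE,
# illustrating steps (a)-(d) above: fetch operands from named
# locations, apply a Boolean operation, put the result somewhere,
# and advance to the predetermined next step.

def run(program, mem):
    pc = 0
    while pc < len(program):
        op, src1, src2, dst = program[pc]   # (a) operand locations
        if op == "AND":                     # (b) which Boolean op to perform
            mem[dst] = mem[src1] and mem[src2]
        elif op == "NOT":
            mem[dst] = not mem[src1]
        elif op == "STORE":
            mem[dst] = mem[src1]
        pc += 1                             # (d) next step is predetermined
    return mem                              # (c) results left in memory

# Even OR must be built up, via De Morgan: a OR b == NOT(NOT a AND NOT b)
mem = {"a": True, "b": False, "na": None, "nb": None, "t": None, "out": None}
prog = [("NOT", "a", None, "na"),
        ("NOT", "b", None, "nb"),
        ("AND", "na", "nb", "t"),
        ("NOT", "t", None, "out")]
print(run(prog, mem)["out"])   # → True
```

Four instructions to express a single OR of two bits gives a small taste of the structural complexity the next paragraph complains about.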

AND, NOT, and STORE. That's three words. Imagine writing a newspaper article (or describing a dynamic process) using repetitions, variations, and combinations of only three root words. So few allowable words or operators (simplistic logic) forces structural complexity and is one of the reasons that software is troublesome. We can simplify the code by having more sophisticated primitive operators in the foundation logic, especially in the time-domain.

If there are but few basic words or operations in any language, then it takes lots of them to put together a process description, or story. Sentences may get very complex due to the repetition of those few words/operators in existence. My approach to a more robust and healthier logic system was to identify the words and operations that would expand the logic experience, with emphasis on the relations of time. Additionally, I devised a temporal logic that would be native to the time-domain, apologies to A. N. Prior. Conventional logics must translate temporal-domain signals into data suitable to the space-domain where they can be manipulated by static operators. My system does not need to translate from the time-domain to the space-domain for logic operations, then translate back to the time domain for useful output. PTQ can determine the truth value of temporally related events and conditions on-the-fly, as they occur in real time, with resultants available within a few gate-delays.

The tasks performed in modern times by computers (and microprocessors and microcontrollers) are no longer confined to translation or transformation of one set of static symbols to another, which was the (presumed) original intent of Alan Turing. Computers are now used to monitor and control dynamic physical processes, while the computers themselves are able to perform only series of static operations as means to those dynamic ends. The memory operator STORE, used to log into memory samples of a process that changes over time, thereby performs a succession of time-to-space translations. This frame-by-frame treatment of a continuous process allows desired static operations to be executed upon the static and discrete values either recalled from memory or directly sampled from external sources. Process-control results are shifted from static repositories within the computing device to the output ports (space-to-time translation). Such approximations of temporal processes can become quite complicated, hard to understand, and in the final analysis, not indisputably correct due to processing delays. Boolean logic and Turing machine principles are anything but fundamental when they are used to deal with time.
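The frame-by-frame treatment described above is recognizable as the conventional polled control loop. In this minimal sketch, `read_sensor` and `write_actuator` are hypothetical stand-ins for real I/O:

```python
import time

def control_loop(read_sensor, write_actuator, setpoint, period_s, steps):
    """Sample-compute-output loop: each iteration freezes one frame
    of a continuous process (time-to-space), operates on the stored
    snapshot, then writes the result back out (space-to-time)."""
    history = []                       # STORE: logged snapshots of the process
    for _ in range(steps):
        sample = read_sensor()         # time-to-space translation
        history.append(sample)
        on = sample < setpoint         # static operation on a static value
        write_actuator(on)             # space-to-time translation
        time.sleep(period_s)           # the process moves on between frames
    return history

# Usage with a fake sensor and a list standing in for an actuator port:
readings = iter([1, 5, 3])
outputs = []
control_loop(lambda: next(readings), outputs.append,
             setpoint=4, period_s=0, steps=3)
print(outputs)   # → [True, False, True]
```

Anything the process does between two samples is invisible to the loop, which is exactly the source of the latency and correctness worries raised above.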

The order of events and the chain of cause and effect are usually much more important than how many microseconds each condition lasts, or at what clock-time each occurred in a process. Physical processes start, continue, and stop. They have beginnings and existence extended in time. They end. They repeat. Several conditions can overlap, with different start and stop times for each. A natural language narrative (say, in English) can precisely describe a process having these characteristics no matter how that process twists and turns over time and in space, and all without reference to clock time, the increments of which, in any case, are arbitrary.

Given the above observations, how do we "tell the process stories" using computers with only AND, NOT (and their combinations), and STORE? It is demonstrably difficult, and it is no wonder that software production for large systems is only 50% efficient and can't ship product guaranteed to be error-free.

Copyright 2011 by c.moeller@ieee.org

Best regards,
CharlieM

> Software is a consequence of using data processors or Turing-type machines (TMs) exclusively.

"SNIP"
> AND, NOT, and STORE. That's three words. Imagine writing a newspaper article (or describing a dynamic process) using repetitions, variations, and combinations of only three root words. So few allowable words or operators (simplistic logic) forces structural complexity and is one of the reasons that software is troublesome. We can simplify the code by having more sophisticated primitive operators in the foundation logic, especially in the time-domain.

>Copyright 2011 by c.moeller@ieee.org
>
>Best regards,
>CharlieM

Yes, it would get complex, but remember DNA is mapped using only 4 letters: "G C A T."
And the whole English language uses only 26.
Down in the trenches, digital electronics uses only two (1 & 0).
I guess it comes down to complexity, as compared to the average human understanding.

RussB,

Note that DNA is formulated with the 4 letters A, C, G, and T (abbreviations for the four different nucleotides). These nucleotides appear in specific groups of three to make a codon (word). The number of different codons, or words, possible is 4 x 4 x 4 = 64. Certain codons are strung in sequences to form the DNA strands. So the root words of DNA are 64 in number.

Twenty-six letters make over a million words in the English language.

My dynamic logic can be expressed in combinations of eleven natural language words that together provide 60 different functions that can succinctly capture both static and dynamic behaviors of simple physical processes.

These functions, activated in parallel chains, mirror the process to be monitored and controlled. The functions can all be instantiated by interconnecting small numbers of NOR or NAND gates in a "sea of gates" FPGA. The fact that I can get all these different functions using one building block (the 2-input NOR gate) is due to the many different ways the gates may be interconnected.

(Recall that S. Cray built a supercomputer using only one type of NAND gate.)
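The "one building block" claim rests on the 2-input NOR being functionally complete: NOT, OR, and AND all fall out of different interconnections of the same gate. A minimal truth-table demonstration:

```python
def nor(a, b):
    """The single building block: 2-input NOR."""
    return not (a or b)

def not_(a):
    # NOR with both inputs tied together acts as an inverter.
    return nor(a, a)

def or_(a, b):
    # NOR followed by an inverter gives OR.
    return not_(nor(a, b))

def and_(a, b):
    # De Morgan: AND(a, b) == NOR(NOT a, NOT b).
    return nor(not_(a), not_(b))

# Exhaustively check all four input combinations.
for a in (False, True):
    for b in (False, True):
        assert not_(a) == (not a)
        assert or_(a, b) == (a or b)
        assert and_(a, b) == (a and b)
print("NOR is functionally complete over these cases")
```

The same exercise works with NAND, which is the universality Cray exploited; whether richer time-domain primitives can be composed this economically is, of course, the open question of the thread.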