After Software, What's Next?

Bruce Durdle

I can't let this go unchallenged!

Charles said:
> The fundamental logic has not changed much either, nor have computer
> activities. All higher-level computer languages (i.e., in software) are
> ultimately decomposable to, hence built up from, sequences and combinations
> of the Boolean operations (AND, NOT, and combinations) and STORE.

So? That's not necessarily a limitation - it's a fact of Boolean life. Similarly, I could state that "All arithmetic operations are made up from additions and complements" - that's two basic operations. It doesn't stop me from using a defined combination of these to do multiplications, logarithms, or trigonometric functions - I don't have to explicitly use multiple additions to get these results.
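
To make Bruce's point concrete, here is a minimal sketch (an editorial illustration, not part of the original post): multiplication built from nothing more than addition and two's-complement negation.

```python
# A minimal sketch: two primitives, addition and two's-complement negation,
# suffice to define multiplication -- a small basis does not limit what can
# be built on top of it.

BITS = 16
MASK = (1 << BITS) - 1

def complement(x):
    """Two's-complement negation: bitwise NOT plus one."""
    return (~x + 1) & MASK

def multiply(a, b):
    """Multiply via repeated addition; negative b handled by complement."""
    if b & (1 << (BITS - 1)):                 # b is negative in 16-bit terms
        return complement(multiply(a, complement(b)))
    total = 0
    for _ in range(b):
        total = (total + a) & MASK
    return total

assert multiply(6, 7) == 42
assert multiply(6, complement(7)) == complement(42)
```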

Bruce.
 

Charles Moeller

Bruce:
> Charles said:
>> The fundamental logic has not changed much either, nor have computer
>> activities. All higher-level computer languages (i.e., in software) are
>> ultimately decomposable to, hence built up from, sequences and combinations
>> of the Boolean operations (AND, NOT, and combinations) and STORE.

Bruce Durdle wrote:
> So? That's not necessarily a limitation - it's a fact of Boolean life.
> Similarly, I could state that "All arithmetic operations are made up from
> additions and complements" - that's two basic operations. It doesn't stop me
> from using a defined combination of these to do multiplications, logarithms,
> or trigonometric functions - I don't have to explicitly use multiple additions to get these results.

The conventional systems of logic can only perform operations that recognize or manipulate static values residing in space. Ordinary logic handles variables in time poorly, constrained as it is to translate all temporal signs, signals, and effects into the space-domain so that they become suitable for static combination and arithmetic rules.

The static systems of logic, descended from Aristotle through Boole [1], Frege, Prior, Pnueli, and all modern logicists and natural philosophers, are lacking in several respects. These many models of logical specification [2] are unable to describe or to create any more than was given (the sum of the parts); they cannot directly express causation (which must instead be humanly divined from static representations); and they cannot directly express or treat dynamic or changing scenarios, so they cannot deal directly with ongoing time or with processes that evolve in time. Yet these observable attributes, including synergy or emergent behavior, cause and effect, dynamic activities, and ongoing time, are very evident in the real world. Life would not have survived as well as it has without recognizing and making beneficial use of these attributes of reality.

One of the troubles with philosophy, logic, "computational intelligence," and other systems of thought is that the formal logic used to specify and substantiate or support concepts and systems is confined to static frames in the space-domain. All temporal information, therefore, must be referred to tokens and labels situated in space. These items of information are made into data, by sampling and storing, after which the only recourse is mechanical data-processing via Turing-type machines (TMs) or manual methods. The ancients dealt with concepts by writing them down and thinking of them as fixed conditions. We can now do the same using computers, but the logic operators in use have not expanded or grown with the passing of millennia. We are thus limited to combinations and sequences of AND, NOT, and STORE. The whole of computer science is founded on those few operators. First-order and modal logics are fundamentally static means through which actions are reckoned from fixed statements or frames, evaluated after the fact. Such static treatment, even aided by super-fast computers, often fails to produce results appropriate for dynamic processes.
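
Because the argument leans on this reduction, a short sketch may help (an editorial illustration, not CharlieM's): with AND and NOT alone, OR falls out of De Morgan's law, and XOR and a half-adder follow from OR.

```python
# Every Boolean function can be composed from AND and NOT alone.

def AND(a, b): return a & b
def NOT(a):    return 1 - a

def OR(a, b):          # De Morgan: a OR b = NOT(NOT a AND NOT b)
    return NOT(AND(NOT(a), NOT(b)))

def XOR(a, b):         # (a AND NOT b) OR (NOT a AND b)
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

def half_adder(a, b):  # sum and carry, from the two primitives only
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        assert OR(a, b) == (a | b)
        assert XOR(a, b) == (a ^ b)
```

Add STORE (a memory element) to this basis and, as the post says, the rest of computer science follows.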

Using static and fixed labels, formal logic discourse admits only of *existence* (in both its presence and absence aspects) and *conjunction* (coincidence in both space and time). This package of restrictions excludes dynamics from that frozen arena. But life and other processes exhibit change and self-motivated activities. How can such functions be specified, or even explored, with logic that allows only static states, or static labels about dynamic states? Aside from how a condition or process is, and how it relates to other things in tableaux, we want to be able to precisely and concisely specify how it came to be, what caused it, and how it acts. There is no such treatment in formal logic, although in ordinary language we routinely express dynamics in a way that most people understand.

So, isn't it time for a change?

1. George Boole, *An Investigation of the Laws of Thought* (1854)
2. About thirty "non-standard" logics (aside from predicate calculus and propositional logic) are listed at http://www.earlham.edu/~peters/courses/logsys/nonstbib.html

Best regards,
CharlieM
 

Julie in Austin

First, there are, and have been for decades, microcontrollers that can present summed values in very short time intervals. So, while an unclocked 16-bit adder might seem all new and clever, such adders aren't, and they aren't going to solve any of the problems that are being discussed.

One item I've not seen beaten to death, like the dead horse it should be regarded as, is the issue of the qualifications of "programmers". And this, in my 33 years of experience as one, plus about 20 years making programmers do my evil bidding, is where the problem lies. Or, rather, the problem is MANAGERS deciding that Bill, who just got his degree last month in WhizBang Programming Language, is as good a programmer as Sue, who learned some other language five or ten years earlier. Mostly because Bill is cheaper than Sue -- that's the real reason many managers like to pretend that the Bills of the world are as good as (or better than) the Sues of the world.

The core of the "software problem", in this problem domain, is that "programmers" are neither scientists nor engineers. Many are little more than glorified typists, and many of the glorified ones have atrocious problem solving skills, which is likely why they are neither scientists nor engineers.

When I was studying Mechanical Engineering (my minor at uni) and working for Marine Engineers (how I paid to be at uni in the first place ...) the amount of "testing" that was performed, either for real or with models, far exceeded what I was being taught over in the Computer Science department. If I was designing your basic Warren Truss, or calculating some parameters of a fuel or water tank on a ship, by the end of the exercise I had a very well-defined "thing". I knew where the forces in my truss came from and went to, or I knew how my tank was going to affect the ship as it did whatever it did with however much of whatever liquid was in it. My models were tested against reality -- my first ship design project accurately predicted that a 200 ton piece of steel was going to float about 6' deep in the water, with the stern a few inches lower than the bow. They put it in the water 2 years later, and it didn't roll over or stand on one end or the other and sink. THAT is engineering. BTW, I was an undergrad during that particular feat of engineering.

Not at all so for "computer science" / "software engineering".

Efforts at getting programmers to think in terms of "fully describing the problem" are futile because "right" and "good enough" are too far apart in terms of cost -- it's the 90/10 rule. Ninety percent of the code is written in ten percent of the time. Which means it takes about an order of magnitude longer to finish that last ten percent. Which is the ten percent that makes sure the other ninety percent is working properly.

Efforts at solving the problem revolve around kicking the can down the road, instead of kicking the programmers out the door. Which gets back to the "marketing" thing -- if the cost, net of corporate jets and free soda and pizza for lunch, of "Bill O/S 1.0" is $50, you can forget trying to convince anyone that in four or five years, when the last of the bugs have been worked out of "Bill O/S 1.5 Update 7", they should pay $200 for it. Especially since they can now buy "Bill O/S 3.0" for $50, complete with all the newest bugs that won't be fixed for another four or five years.

Don't believe me? Check out what Microsoft has been doing with Windows XP, Vista and now 7 (and 8 is in the wings). Which is more robust? Which can you readily buy? What would Windows XP actually cost if Microsoft had to keep fixing all the bugs, without adding all the newest features that would drive revenue from the "gotta have the sexy features!" crowd?

And I'm going to wrap this up right about now.

My earliest professional programming gigs, before I decided to be a professional programmer for real, were all Marine Engineering related. Most ships are designed according to a set of rules from the American Bureau of Shipping. They had giant books filled with rules for every aspect of a ship. And when something broke, they'd figure out why, and come up with a new rule, and hopefully things didn't break (and people didn't die) the next time a ship got built. Ship designs were not based on "sexy". The process was not "marketing department driven". The process was based on successive refinement with feedback from real-world experiences. No one was designing "transparent overlay with animation" water-tight bulkheads. But mostly they weren't hiring Music majors (I had one once on my staff ...) to design ships because they could hold a pencil or move a mouse.

The software problem is NOT software. It exists because "good enough" has become the standard against which completion is measured, and "sexy" is the standard against which "better" is measured, and "cheaper" is the standard against which "value" is measured. Verified designs and "provably correct" don't even enter into the picture.
 
Armin Steinhoff wrote:
>> ... and what is a multi-threaded time?

Charles Moeller wrote:
> I refer to the existence and influence of objects and people on intersecting world-lines.

I would never trust control systems based on such a "multi-threaded time" ...

Best Regards
Armin Steinhoff
 

Vladimir E. Zyubin

Armin Steinhoff wrote:
>>> ... and what is a multi-threaded time?

Charles Moeller wrote:
>> I refer to the existence and influence of objects and people on intersecting world-lines.

Armin Steinhoff wrote:
> I would never trust control systems based on such a "multi-threaded time" ...

I personally still think there can be something interesting behind "multi-threaded time"; we just cannot discuss it because of "cognitive dissonance" aggravated by the wish to patent the idea. A counterproductive way, IMO, but the decision is up to the author.

--
best regards, Vladimir
 

Charles Moeller

Julie in Austin:

> The software problem is NOT software. It exists because "good enough" has
> become the standard against which completion is measured, and "sexy" is
> the standard against which "better" is measured, and "cheaper" is the standard
> against which "value" is measured. Verified designs and "provably correct"
> don't even enter into the picture.

I appreciate your viewpoint.

The hidden problem I am addressing is the complicating factor of translation from the real to the artificial spaces of computer memory, and back to the real after processing.

Since all software is in the space-domain:

- pick X from there
- place it here
- transform it thus
- put the result there
- pick Y from other place
- decrement it by one
- place it in new space
- ...

whereas our control problems exist in real space and time, so a translation from the real to the artificial, via placement in memory, is required. In that constructed universe mediated by numbers and arithmetic, the properties of time are generally taken to be the same as, and in fact are mapped onto, a fourth spatial dimension having the general character of extension, or length. We use counting mechanisms to translate time-ticks into the space domain, as can be seen on the faces of our clocks or in the addresses of our control store. This practice fits nicely into arithmetic computers, but it adulterates and obscures the true character of the time domain.
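
A minimal sketch (an editorial illustration, not CharlieM's code) of the translation just described: a tick of time survives inside a computer only as an incremented counter, a number residing in space, from which duration must be recovered by arithmetic.

```python
# Time enters a conventional controller only as counted ticks stored in
# memory -- a number in the space domain, like the numerals on a clock face.

TICK_SECONDS = 0.01        # preselected measurement quantum

elapsed_ticks = 0          # "time", reduced to a value residing in memory
for _ in range(500):       # each tick of real time...
    elapsed_ticks += 1     # ...becomes an increment of a spatial token

print(elapsed_ticks * TICK_SECONDS)   # 5.0 -- duration recovered by arithmetic
```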

The combination of artificial, predetermined dimensions of space and time, and the limitations of arithmetic operations, force one to record and determine the conduct of processes using successive frames or snapshots according to preselected measurement quanta. The digitization of the space-time functions of a process, however, forever sunders the co-mingled space-time continuum into separate spatial and temporal parameters, which are ultimately relegated to signed and numbered tokens in the space domain. Once those parametric relations are separated, extraordinary measures must be taken, via complex algorithms, to extract meaning from them. Critical inquiry concerning an event requires comparison between stored frames after the fact, and the development or discovery of suitable relationships that could have produced a given frame from its predecessors. Assumptions based upon experiential knowledge are often applied to these phenomena to unravel the quandaries.

In digital process monitoring and control, as it is presently practiced, continuous natural time cannot be accommodated. Time, consequently, is reckoned as successive snapshots of a process in space, with the temporal intervals between frames preselected to be small enough, hopefully, to monitor and control the process adequately. If one visits and records every sampling point and frame, one need not think about the process, except in retrospect. In current digital controllers, data is collected, then decisions are made. The response always occurs well after the event and not concurrently with it. While the computer is busy processing some previously acquired data, it is virtually blind to other occurrences in real time. Reliance upon this mode of operation is a program for disaster, given the possibility of a missed event or failed component. Chaos lurks.
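
The sample-then-decide blindness described above can be shown with a toy polling loop (an editorial sketch; the names `pulse` and `LIMIT` are invented for illustration): a fault shorter than the sampling period is never seen at all, and any response to a seen event lands at least one period after the event.

```python
# A toy polling controller: events between snapshots are invisible, and
# every response is after-the-fact.

SAMPLE_PERIOD = 0.1    # seconds between snapshots
LIMIT = 1.0            # alarm threshold

def pulse(t):
    """A fault 20 ms wide -- shorter than the sampling period."""
    return 2.0 if 0.31 <= t <= 0.33 else 0.0

def poll_and_control(read_sensor, steps):
    alarms = []
    for step in range(steps):
        t = step * SAMPLE_PERIOD
        if read_sensor(t) > LIMIT:    # decide from the stored snapshot
            alarms.append(t)          # response is at least one period late
    return alarms

print(poll_and_control(pulse, 10))    # [] -- the fault fell between samples
```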

Best regards,
CharlieM
 

Charles Moeller

Armin:

Armin Steinhoff wrote:
>>> ... and what is a multi-threaded time?

Charles Moeller wrote:
>> I refer to the existence and influence of objects and people on intersecting world-lines.

Armin Steinhoff wrote:
> I would never trust control systems based on such a "multi-threaded time" ...

A pity, as that is what wider words and faster clocks attempt to do, without ultimate success.

Wouldn't it be easier to work in the same environment in which you live?

Best regards,
CharlieM
 

Charles Moeller

Vladimir:

> I personally still think there can be something interesting behind "multi-threaded time"; we just cannot
> discuss it because of "cognitive dissonance" aggravated by the wish to patent the idea. A counterproductive
> way, IMO, but the decision is up to the author.

As Ayn Rand wrote, "We exist for the sake of earning rewards."

If I don't find interested parties in academia or enterprise, I will eventually make my method public.

Best regards,
CharlieM
 
Bruce Durdle

As I see it, the "software as in the bits and bytes needed to make a computer work" is confused with the "software as in how do I make this system do what I want?".

The difficulty in developing solutions to control problems is in first defining exactly what the problem to be solved is. The definition may well involve physical, mechanical or chemical effects as well as time and numerical parameters. Defining the problem and specifying what is an acceptable response has to be done by people who know what they want the system to do - the end-users (operators and plant managers). This problem definition has then to be conveyed to those who develop the control solution. Once the solution has been developed, exact details of its capabilities and limitations need to be passed back to the end-users for validation and verification.
A control solution has to be "complete, concise, and clear" - the latter word applying to all parties involved in its specification, design, operation and maintenance. So one absolute essential of a control system specification is that it must be understandable by all involved - not just an elite few eggheads who get obsessed by fine detail to the exclusion of the overall performance.

There is enough difficulty in people unambiguously interpreting specifications using the existing, limited capabilities of Boolean logic. Throw in a few strange symbols, such as are found in some of the logic references quoted, and this becomes much harder.

Note that nowhere above have I referred to "hardware" or "software" solutions - this problem exists whatever the solution format adopted. One of my first tasks as a graduate engineer was to translate the relay wiring diagrams on 40-odd sheets of A2 into a format that could be understood by the operators and maintenance staff - the final format was about 3 sheets of Function Block Diagrams similar to the IEC61131 format. I have found on a number of occasions that a combination of the FBD and SFC formats meets all needs and is quite easily interpreted by most people with a minimum of training in how to read them.

Once the problem definition has been sorted out, it can be passed over to the software jockeys to crunch out the code - or over to the hardware whizzes to put into a hardware format. If the first part is done as it should be, the code or hardware configuration should fall out of the functional specification. It is when the coders or detail designers begin to impose their own ideas on to the solution (often without a full understanding of the issues involved) that things start to turn to custard.
 

Charles Moeller

Bruce Durdle:

> As I see it, the "software as in the bits and bytes needed to make a computer
> work" is confused with the "software as in how do I make this system do what I want?".

There shouldn't be any confusion between:
1. the *OS*, being that software needed to make the shared hardware act like a Turing-type machine (TM), good for acquiring and shuffling data around, and

2. the *application*, which makes the TM act like a process monitor-controller.

The difficulties with software, I have found, are due to the exclusive use of the Turing paradigm. All of software, with its rules, its complexities, and its faults, derives from the restriction to Turing's approach and method (computation). Software-mediated response is always after the fact, as it addresses the control situation after the best moment for action has passed. Modern digital control activities are never direct, but depend upon the integration and coordination of at least four separate systems:

* the physical process to be monitored and controlled

* the electronic hardware: microprocessors, sensors and effectors

* the operating system (OS) that enables the available electronic hardware functions to be accessed and exercised on a shared basis

* the application software, a series of instructions that tells the hardware what it is time to do

In some ways, the goals and activities of these support systems are in competition for resources they must share. All efforts to date in the field of computational control systems have addressed the various problems and difficulties that exist and which are created, in part, by this limited choice of the Turing method. Turing-type machines and their necessary software are constrained to work in the space-domain, while the physical processes we wish to control inhabit the domains of space and time. It has proved to be cumbersome, inefficient, and unsafe to effect process control in natural space-time with tools that only work in and upon space, such as microprocessors and software. The required translation of temporal concepts, relationships, and actions to the space-domain (before they can be operated upon by the space-only logic operators) and back again to the time-domain for useful output, only adds to system complexity and difficulty.

The Turing treatment produces a number or condition (or series of numbers or conditions) as its salient output by performing static transformations and translations. The manner in which the numbers or states so produced relate to the physical process being controlled must be determined and referenced by the programmer, who uses lookup tables and numerical and conditional benchmarks for comparison at selected points of the process.
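
A small sketch of the benchmark style just described (an editorial illustration; the stage names and ranges are invented): the program's numbers relate to the physical process only through tables the programmer supplies.

```python
# The controller's numbers only mean anything about the plant through
# programmer-supplied lookup tables and benchmarks.

BENCHMARKS = {                 # hypothetical checkpoints of a batch process
    "fill": (0.0, 50.0),       # acceptable level range while filling, litres
    "heat": (49.0, 51.0),      # level must hold steady while heating
}

def within_benchmark(stage, measured_level):
    low, high = BENCHMARKS[stage]
    return low <= measured_level <= high

print(within_benchmark("fill", 37.2))   # True -- the number matches the
                                        # table, whatever the tank is doing
```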

My solution, PTQ, is an alternative mechanical reasoning system for process control that is simpler and more direct than the Turing paradigm. The PTQ method generates a process in its logic elements that takes its cues from, and mirrors, the real physical process being monitored and controlled. The physical (real world) and electronic (ideal) processes are easily compared in a continuous manner for correspondence. Differences can cause process suspension, correction on-the-fly, or an alarm to be raised. The new method works natively and directly in each of the domains of space, time, and (joint) space-time, without translation.

The primitive static operators AND (conjunction), NOT (negation), and STORE (memorize), in combinations and sequences, are necessary and sufficient to generate the whole of computer science. PTQ has grown beyond computation by incorporating, in corresponding hardware logic, seven more *primitive operators*. These additional operators and their functions describe activities and reactions in the time domains. They are dynamic operators that, in combination with the conventional static AND and NOT, make it easier to "tell the process stories" (specify processes). Physical processes can therefore be described in more appropriate and natural language, which enables one to monitor and control them automatically without run-time software.
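
The seven PTQ operators are not named anywhere in this thread, so the following is purely a hypothetical illustration (an editorial invention, not PTQ) of what a dynamic, time-domain operator could look like: one whose output is an event, a becoming, rather than a static level.

```python
# Hypothetical sketch only: NOT one of the actual PTQ operators, which are
# not disclosed here. It merely illustrates an operator whose meaning is a
# change in time rather than a static value.

def becomes(samples):
    """Yield True at each moment a Boolean signal turns from False to True."""
    previous = False
    for current in samples:
        yield current and not previous   # a temporal event, not a state
        previous = current

signal = [0, 0, 1, 1, 0, 1]
print(list(becomes(bool(s) for s in signal)))
# [False, False, True, False, False, True] -- two "becomes-true" events
```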

Real-time and naturally parallel-concurrent, the electronic hardware corresponding to the additional dynamic operators acts with the process being controlled as it happens, rather than after the fact as software-mediated controllers do.

In conventional control practice there are four systems that interact, often in competitive ways, as mentioned above. The PTQ method has just two systems working hand-in-glove:

* the physical process and

* the PTQ real-time process monitor-controller

The description of the correct process controller is simply an accurate specification of the physical process being controlled. Using PTQ terminology, dynamic concepts specified for the process are easily implemented in corresponding defined hardware logic elements. The number of languages used in system specification and implementation is limited to one, that being English (constrained to the specified operators). A PTQ controller's architecture is expressly suited to the process being controlled because it *emerges* from the process specification. A change made to the process specification automatically re-determines the logic elements to be used and modifies the controller architecture as appropriate when instantiated. PTQ monitor-controllers are mostly reactive electronic hardware systems that continuously verify the correctness of their own activities and those of the physical processes being monitored and controlled.

PTQ is a more natural and fundamental means of specifying, monitoring, and controlling physical processes than is computing. Since the operators in PTQ also include those which are necessary for computation, there is nothing lost but much to be gained through its use for physical process control. Among the advantages are increased safety, ease of use, simple concepts able to be quickly and easily implemented in corresponding logic element hardware (in FPGAs), natively parallel-concurrent and real-time operation, easy modifications or upgrades via changes to the specification, less hardware, flexible architecture, little or no run-time software, and faster response.

Mainstream thinking leans toward preserving that in which it has already invested so much. As a result, the software industry is still looking for the be-all and end-all "super-software"—a much-improved Turing-type machine—not a better and more fundamental approach like ALS (Westinghouse) or PTQ. At the very least, PTQ can supervise physical processes in ways that are more efficient and not subject to the problems of software.

Best regards,
CharlieM
 

Vladimir E. Zyubin

CharlieM wrote:
> As Ayn Rand wrote, "We exist for the sake of earning rewards."

It is up to them to choose their purpose of life.

> If I don't find interested parties in academia or enterprise, I will
> eventually make my method public.

Well, I personally do realise there are a lot of problems with the current linguistic means in automation. And, as I understand, Armin admits the current situation can be improved as well. And there is no need to popularize the idea of changes. So, the question is: what are the changes? If you cannot tell about them because of "the sake of earning [material] rewards", then... life is very short, and there are a lot of other interesting things besides spending time on the decryption of fuzzy allusions that are (I must confess) not understandable to me. As for me, I think it is easier to wait for a patent or a publication about the "chrono-synclastic" solution.

with kindest regards, Vladimir
 

Vladimir E. Zyubin

Bruce Durdle wrote:

> The difficulty in developing solutions to control problems is in first defining exactly what the problem to be solved
> is. <...> Defining the problem and specifying what is an acceptable response has to be done by people who
> know what they want the system to do - the end-users (operators and plant managers).

You point out the key feature. Any controlled object has a control algorithm that is defined during the design process... before the controlled object is made. So, the people who know what the system must do are its inventors and designers.

> Note that nowhere above have I referred to "hardware" or "software" solutions

It is quite obvious that the implementation ("how to do") problem is a second-order problem; the first-order problem is to express "what to do" in a maximally seamless form (a lack of "seams" between the designer's way of thinking and the program form).

> I have found on a number of occasions that a combination of the FBD and SFC formats meets all
> needs and is quite easily interpreted by most people with a minimum of training in how to read them.

Agreed. SFC has features that are close to those that are needed, but the way designers think differs a bit from SFC's conceptual means...
FBD (the data-flow concept) has a very limited area of applicability, IMO.

best regards, Vladimir
 
[clip]
Charles Moeller wrote:
> Mainstream thinking leans toward preserving that in which it has already invested so much. As a result,
> the software industry is still looking for the be-all and end-all "super-software"—a much-improved
> Turing-type machine—not a better and more fundamental approach like ALS (Westinghouse) or
> PTQ. At the very least, PTQ can supervise physical processes in ways that are more efficient and not subject to the problems of software.

Every FPGA based control system is software based ... this software is called firmware, which can include lots of faults.

Best Regards
Armin Steinhoff
 

Charles Moeller

Vladimir:

CharlieM wrote:
>> As Ayn Rand wrote, "We exist for the sake of earning rewards."

Vladimir Zyubin wrote:
> It is up to them to choose their purpose of life.

CharlieM wrote:
>> If I don't find interested parties in academia or enterprise, I will
>> eventually make my method public.

Vladimir Zyubin wrote:
> Well, I personally do realise there are a lot of problems with the current linguistic means in automation. And, as
> I understand, Armin admits the current situation can be improved as well. And there is no need to popularize the idea
> of changes. So, the question is: what are the changes? If you cannot tell about them because of "the sake of earning
> [material] rewards", then... life is very short, and there are a lot of other interesting things besides spending time
> on the decryption of fuzzy allusions that are (I must confess) not understandable to me. As for me, I think it is
> easier to wait for a patent or a publication about the "chrono-synclastic" solution.

Thank you for your patience.

Best regards,
CharlieM
 

Charles Moeller

Vladimir,

You wrote:
> It is quite obvious that the implementation ("how to do") problem is a second-order
> problem; the first-order problem is to express "what to do" in a maximally
> seamless form (a lack of "seams" between the designer's way of thinking and the
> program form).

That is a very nice way of expressing the real problem, Vladimir.

I object to the method of TM, shared resources, and software because it is several steps removed from reality. Using computation, we act on the values and locations of tokens that are supposedly good representations of samples taken from the process, but we cannot be completely assured that this is the case. TM processing puts many, many components and factors, all subject to faults, between the real process and the means of monitoring and control.

I have devised a better way of specifying processes that can be directly implemented in hardware that performs immediately in a stimulus-response manner. My method depends only upon precise specification of the actual process in terms of the allowable PTQ operators.

Best regards,
CharlieM
 

Charles Moeller

Armin,

Charles Moeller wrote:
>> At the very least, PTQ can supervise physical processes in ways
>> that are more efficient and not subject to the problems of software.

Armin Steinhoff wrote:
> Every FPGA based control system is software based ... this software is
> called firmware, which can include lots of faults.

Your statement, "Every FPGA based control system is software based ..." is not strictly correct.

It is true that a computer-based software system is used to *configure* FPGAs, but only in certain cases is *run-time software* used to activate FPGA functions. In those cases, the FPGA (or part of it) has been configured to run as a TM-type machine.

In the monitor-controllers I design, Xilinx XPLA Professional software (or a current version) is used to program (configure) the logic elements and interconnection pattern in a Cool Runner complex programmable logic device. The resulting hardware configuration runs by itself, given the appropriate stimuli. There is no need for run-time software. Systems such as these are not "software based", i.e., they do not run on software, although they are software-configured, to be sure.

Best regards,
CharlieM
 
Some additional comments ...

> [clip]
Charles Moeller wrote:
>> Mainstream thinking leans toward preserving that in which it has already invested so much. As a result,
>> the software industry is still looking for the be-all and end-all "super-software"—a much-improved
>> Turing-type machine—not a better and more fundamental approach like ALS (Westinghouse) or
>> PTQ. At the very least, PTQ can supervise physical processes in ways that are more efficient and not subject to the problems of software.

Armin Steinhoff wrote:
> Every FPGA based control system is software based ... this software is called firmware, which can include lots of faults.

IMHO ... the subject of this communication thread is wrong. There will always be software necessary, even if you program "programmable hardware".

Under development are programming languages that are able to express timing dependencies:
Giotto (http://embedded.eecs.berkeley.edu/giotto ... time triggered) or
Lustre (http://www-users.cs.york.ac.uk/~burns/papers/lustre.pdf or
http://www-verimag.imag.fr/~halbwach/PS/tutorial.ps ... with some elements of temporal logic)

A list of Synchronous Languages:
http://rtsys.informatik.uni-kiel.de/teaching/ss08/v-synch/lectures/index.html#lecture16

Secure and time-oriented languages are available for the production of safe software ...

Best Regards
Armin Steinhoff
 
Charles Moeller wrote:
>>> At the very least, PTQ can supervise physical processes in ways
>>> that are more efficient and not subject to the problems of software.

Armin Steinhoff wrote:
>> Every FPGA based control system is software based ... this software is
>> called firmware, which can include lots of faults.

Charles Moeller wrote:
> Your statement, "Every FPGA based control system is software based ..." is not strictly correct.

> It is true that a computer-based software system is used to configure FPGAs, but only in certain cases is
> run-time software used to activate FPGA functions.

After a cold start of an FPGA based system, you have in all cases to upload the firmware to the FPGAs in order to configure them. This firmware is stored in serial EPROMs or flash memories. The firmware is software and can include a lot of failures, which are also hard to fix.

> In those cases, the FPGA (or part of it) has been configured to run as a TM-type machine.

> In the monitor-controllers I design, Xilinx XPLA Professional software (or a current version) is used to program (configure)
> the logic elements and interconnection pattern in a Cool Runner complex programmable logic device. The resulting
> hardware configuration runs by itself, given the appropriate stimuli. There is no need for run-time software. Systems such as
> these are not "software based" i.e., they do not run on software,

The software is represented as the links between the gates of the FPGA ... if one link is wrong, the hardware will not work as expected.

Best Regards
Armin Steinhoff
 

Vladimir E. Zyubin

CharlieM wrote:
> I object to the method of TM, shared resources, and software because it is several steps removed from reality.

OK. The "TM metod" is bad, but the modern computer architecture does not prevent to use other methods, based on lambda-calculus, for example.

> I have devised a better way of specifying processes that can be directly implemented in hardware that
> performs immediately in a stimulus-response manner. My method depends only upon precise specification
> of the actual process in terms of the allowable PTQ operators.

Well, I do not know what the PTQ operators are, but it does not matter, because the right question is: can the PTQ operators be implemented on the modern computer architecture or not?

If the PTQ operators can be implemented on the architecture, then your invention can be divided into two parts: a way of specification and a way of implementation. And the parts should be discussed separately (the "divide and conquer" principle). If they cannot be implemented, then it is a real academic result, and you need not think about money and a job at all, because you could earn a lot of money as an invited lecturer.

best regards,
Vladimir
 