Temperature Control, PC-based

T
I realised that I'd misinterpreted Curt's comments, and acknowledged this in a personal e-mail to him (it takes an age to get a comment onto the list, and I didn't want to start a delayed-action flame war; one day I will actually follow my own rule of reading an e-mail three times before assuming I understand it!).

Nonetheless the argument about overall architecture stands. Devices using Linux require a great deal of hardware resource just to run the O/S. By centralising function (in this case centralising the I/O, even if overall function is distributed) you concentrate resource physically into one box, which leaves you vulnerable to failure and ties you to a generalised supplier. Smaller smart devices or subsystems, using appropriate levels of hardware resource, allow a divide-and-rule approach to problem solving and autonomous segments, using the best device (i.e. coming from those with the highest level of expertise) for the job. Brute-force network data transfer is replaced in this model by appropriate transfer of just the data necessary to co-ordinate the operations, with better use of bandwidth.
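
To make "appropriate data transfer" concrete, here is a toy sketch of report-by-exception in Python (all names invented for illustration, not any real product's code): the device stays quiet until a value moves outside a deadband.

    # Toy sketch of "report by exception": a smart device publishes a value
    # only when it moves outside a deadband, instead of streaming raw I/O.

    class ExceptionReporter:
        def __init__(self, tag, deadband, publish):
            self.tag = tag               # point name, e.g. "furnace1.pv"
            self.deadband = deadband     # minimum change worth reporting
            self.publish = publish       # callback that puts data on the network
            self.last_sent = None

        def sample(self, value):
            # Send only if the value has moved enough to matter.
            if self.last_sent is None or abs(value - self.last_sent) >= self.deadband:
                self.publish(self.tag, value)
                self.last_sent = value

    reporter = ExceptionReporter("furnace1.pv", deadband=0.5, publish=print)
    for pv in (99.8, 99.9, 100.0, 100.6, 100.7):
        reporter.sample(pv)   # only 99.8 and 100.6 go out on the wire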

As someone working for a company producing smart devices such as these (temperature controls and drives), you'd expect me to say this, but I do personally believe it is the way forward. Analogies are everywhere (not least the way PC technology is used), but possibly the best are mammals (be small, find a niche, take over the world)!

Cheers

Tim
 
Well, I suppose the people in the '60s who started off with a central computer for control did so because it was the right thing to do at the time. Of course, things have changed enormously since then in terms of computing.

A plant should be set up to run like the net - the Internet. (We all agree the Internet is a great thing, right?) Internet sites are big business and have to be reliable - just as reliable as a plant.

A plant is made up of (1) users and (2) production machinery. Existing systems serve the machinery well but are really inefficient for the users. I see requirements for high levels of training, expensive specialized hardware and software tools, and overly complex systems with way too many gateways, interfaces and all kinds of in-between software. I call it a "dog's breakfast". Some of the systems I've seen are a total mess, and run like it too. And in all fairness, there has not been much choice in the market for streamlining installations.

To simplify and reduce cost, why not have users <-> servers <-> field information? The cost of using IT technology is low, and so are the comparative training requirements. Servers can access I/O directly, or via smart devices. It doesn't really matter how the servers get the data; however, smart devices should only exist where there is a good need for them, because it is unlikely that these smart devices can be maintained via the common "thin" user terminal that accesses the servers.

In this structure the data from the machinery is freely accessible to everyone - and every system - that needs it. Quality programs, historical data, maintenance and operations are no problem. Few spare parts, easy maintenance and great process visibility are some of the benefits.

Gathering gobs of simple I/O data into a server is not a problem - e.g. thousands of I/O points on Profibus. Trying to manage smart devices on an I/O network is a problem (e.g. Foundation Fieldbus). I have proof of both.

The problem with smart devices or specialized control devices like a PLC or DCS controller is that as the process gets more complicated and larger, you stuff more code into these devices, and you need lots of them. In our current environment they are maintenance hogs. They choke on overruns if pressed into anything onerous. The databases are split, duplicated at minimum twice, maybe three or four times. And they are expensive. Far more expensive than using your average real computer running a quality OS (like Linux) with 20 times the processing capacity.

If you need live proof of a 3500+ I/O plant system operating on one Dell Server in a must-run situation, give me a call (250) 724-1402. I'll give you directions etc. and pick you up at the airport.

Paul Jager
www.mnrcan.com
 
C
Hi Tim

Just a short note. You misinterpret again - it's probably my fault. In the peer-to-peer scenario each node could have as much or as little I/O as its location or function demanded, and all of it would be addressable in the same time and manner as local I/O, whether from the node it's physically connected to or from any other node. That neatly sidesteps the difference between local and remote I/O: it's all the same distance and identical as far as access is concerned, and there is no single point of failure unless you choose to build it that way. I wish I had a way to build the hardware and code this; I suppose for demonstration purposes I could use a half dozen old PCs. Smart devices are cool too, but right now a custom device costs as much as an SBC - it's about volumes. I'd like to see smart devices with Ethernet transport. It'd be great when I want to put a point or two someplace where there's Ethernet already.
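
A rough sketch of what I mean, in Python (names and transport invented; a real build might use UDP or raw Ethernet frames): every point gets one global address, and the caller never knows or cares which node it lives on.

    # Every I/O point has one global address, and reading it looks the same
    # whether the point is wired to this node or to a peer.

    LOCAL_NODE = "node3"
    local_points = {"TT101": 73.2, "PT102": 14.6}   # points wired to this node

    def read_point(address):
        node, point = address.split("/")            # e.g. "node3/TT101"
        if node == LOCAL_NODE:
            return local_points[point]              # direct read, no network hop
        return ask_peer(node, point)                # same call shape, remote fetch

    def ask_peer(node, point):
        # Stand-in for a peer-to-peer request; the real transport is open.
        raise NotImplementedError

    print(read_point("node3/TT101"))   # caller can't tell local from remote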

Regards.

cww
 
T
> A plant should be set up to run like the net - the Internet. (We all
> agree the Internet is a great thing, right?) Internet sites are big
> business and have to be reliable - just as reliable as a plant.

Is the Internet a great thing? Well, yes and no. September 11 demonstrates very clearly both sides of the coin. Those parts of the system relying on big central servers to carry news were completely swamped, whereas e-mail routed round distributed servers (of variable size and capacity) worked very well.

So here is the problem with centralisation in a nutshell, which any user of a mainframe or mini in the dim and distant past will recall. More users and more applications add more than serially to the load and response time, as overheads stack up, and eventually the system stops. The only answers are (a) control access by limiting users/applications (effectively losing the benefits of open access), or (b) throw increasingly powerful CPUs and faster networks into the mix.
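
The textbook M/M/1 queueing formula makes the point: mean response time is 1/(service rate - arrival rate), which blows up as the shared box nears saturation. A small Python sketch with illustrative numbers:

    # Why load stacks "more than serially" on a shared machine: the last
    # few users added near saturation hurt far more than the first few.

    def response_time(arrival_rate, service_rate):
        if arrival_rate >= service_rate:
            return float("inf")          # the system "stops"
        return 1.0 / (service_rate - arrival_rate)

    for users in (10, 50, 80, 95, 99):
        # each user offers 1 request/s against a 100 req/s central server
        print(users, "users ->", round(response_time(users, 100.0), 3), "s")
    # 10 -> 0.011 s, 95 -> 0.2 s, 99 -> 1.0 s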

Distributing the servers, on the other hand (and no-one is disputing that client/server architecture is a good thing), also distributes the load of any section as a parallel function, so overall system response time remains close to constant. I won't repeat the arguments about the other benefits of distributing smart devices close to their point of application, but one thing I didn't say is that the capability of the devices is a simple function of the job they do, not of the overall (and variable) system load they must support. So they can stay in place literally for decades.

I don't (yet) buy the use of Ethernet as an all-purpose automation network, by the way (as a data conduit up to IT, certainly, but not running round a machine). It looks like it adds cost and potential problems with no clear benefits over existing field networks, which were after all designed for the purpose. And it does implicitly scale up the sorts of control nodes that can be used into devices capable of running an Ethernet stack (although I can see this changing over time).

I'm quite sure that applications based on a central PC work - the question is whether this is the best way to go, which is very largely a matter of opinion. If you care to get on a plane, I'll meet you at the airport and take you to meet a bearded lady I know - some people find her attractive (and I'm very fond of her). But is feminine facial hair the way to go for the general population? Walking around my own district, I see far more clean-shaven women. It seems to work for them.


Tim Linnell
 
T
I don't really argue with many of Curt's or Greg's points in the last batch received - I think all of us are arguing for the use of distributed and appropriate technology. The differences are really relatively minor, and really constrained by what we want to achieve. I want to use small microcontroller-based devices close to the point of application, so my architecture requires lower-bandwidth networks, with devices acting as function and status servers (i.e. a temperature controller taking a setpoint and returning PV and alarm status, or a small PLC acting autonomously and returning exceptions and summaries).
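
As a toy sketch of such a function and status server (the little two-command protocol below is invented purely for illustration - certainly not Eurotherm's):

    # The controller does the PID work locally; the network only carries
    # setpoint in, PV/alarm status out.

    class TempControllerNode:
        def __init__(self, high_alarm):
            self.setpoint = 0.0
            self.pv = 0.0            # process value, updated by the local loop
            self.high_alarm = high_alarm

        def handle(self, request):
            # Tiny text protocol: "SP 450.0" sets, "STATUS" reads back.
            if request.startswith("SP "):
                self.setpoint = float(request[3:])
                return "OK"
            if request == "STATUS":
                alarm = "HI" if self.pv > self.high_alarm else "OK"
                return f"PV={self.pv:.1f} ALARM={alarm}"
            return "ERR"

    node = TempControllerNode(high_alarm=500.0)
    print(node.handle("SP 450.0"))   # -> OK
    node.pv = 449.7                  # the local control loop updates this
    print(node.handle("STATUS"))     # -> PV=449.7 ALARM=OK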

Curt (if I interpret correctly, for once!) wants to use a complete and globally accessible I/O database, which requires capable processors running on a fast network. This implies some degree of physical and logical clustering of function to justify the hardware costs of running (say) Linux and a reasonably capable high-speed Ethernet implementation.

As Greg notes, if more capability costs the same and has the same footprint, there's no problem, and so choosing one or other of the architectures is really down to judging tradeoffs and relative benefits. I think you could quite easily justify a mixed approach, where Curt's larger I/O servers coordinated groups of my specialist highly local slave sub-systems, and I guess that if it were my plant, this is most probably what I'd be looking at.

Anyway, the reason I made the mammals analogy (from shrew through blue whale!) is really because fundamentally I think I'm arguing for diversity and specialisation. I do actually believe (short pause while I whistle the corporate anthem...) that, after a quarter of a century of experience, Eurotherm does a better job of PID temperature control than a generic software block would, or that AB's reliability and long-term spares availability makes a PLC a better bet long term than a commodity PC. Ultimately, if specialisation and expertise have a benefit and a value - which I think they have - then it's worth finding some way of incorporating them into your model, and being able to buy a relatively cheap and autonomous block of functionality off the shelf seems to me a good way to go.

I suppose you could take the view that the Linux I/O box is sort of like Homo Sapiens, i.e. an adaptable all purpose solution (even when female and bearded (see previous post to list)) with a more or less common and standard comms protocol. But if I was looking for truffles, I'd most definitely use a dog or a French pig, whatever robotic Gnus the open source community came up with! And I most certainly wouldn't use a Microsoft product...

Cheers

Tim
 
Tim Linnell beats an interesting drum here, and I'd like to extend this thread beyond the simple (or not so simple) topic of misusing a PC for process control. While I spent 21 years side by side with Tim in the industrial sector, I have since migrated to residential, and I can clearly see that no residential, and few commercial, networking or control technologies should be applied in industry. Look at the two newest home automation standards, UPnP and OSGi. UPnP, created by Microsoft, has no standard provision for alarms and no guideline for user interface; a supervisory system must have intimate knowledge of the control device to figure out whether there is a problem. OSGi, a Sun concoction, allows applications to be downloaded to a local CPU and managed by a server, but has no provision for multiple applications accessing the same control device - at least none that I can see clearly.

Both skirt the issue of security. Someone started a thread the other day about using a wireless LAN to access his PLC-5. Not on my shift, Captain - it is totally insecure. There was a news clip on MSNBC several months ago about people positioning themselves outside offices so they could steal bandwidth over the corporate 802.11b LAN. I can't get broadband in my neighborhood, so every now and then I fire up my wireless web tablet to see if there is any free 2.4 GHz floating around - no luck yet.

I caution anyone who tries to use COTS equipment in an industrial application. Industrial stuff is more expensive for more reasons than just production volume.

Cheers to Tim's bearded lady,
Mitch
 
C
Hi Tim

Tim Linnell wrote:
>
> I don't really argue with many of Curt's or Greg's points in the last
> batch received - I think all of us are arguing for the use of
> distributed and appropriate technology. The differences are really
> relatively minor, and really constrained by what we want to achieve.
> I want to use small microcontroller-based devices close to the point
> of application, so my architecture requires lower-bandwidth networks,
> with devices acting as function and status servers (i.e. a temperature
> controller taking a setpoint and returning PV and alarm status, or a
> small PLC acting autonomously and returning exceptions and summaries).
>
> Curt (if I interpret correctly, for once!) wants to use a complete and
> globally accessible I/O database, which requires capable processors
> running on a fast network. This implies some degree of physical and
> logical clustering of function to justify the hardware costs of
> running (say) Linux and a reasonably capable high-speed Ethernet
> implementation.
>
> As Greg notes, if more capability costs the same and has the same
> footprint, there's no problem, and so choosing one or other of the
> architectures is really down to judging tradeoffs and relative
> benefits. I think you could quite easily justify a mixed approach,
> where Curt's larger I/O servers coordinated groups of my specialist
> highly local slave sub-systems, and I guess that if it were my plant,
> this is most probably what I'd be looking at.

It goes beyond that, though. It's not elegance for the sake of elegance, or high technology just because we can and it's affordable, nor even a contrast between the efficiency of openness and the Tower of Babel with its hideous duplication of effort. To get past the roadblocks and make distributed computing in automation popular, and hopefully marketable, it has to be simple enough for the electrician to cope with - or at least that's the message I get.

In other fields we would seek to exploit the technology for more complex functionality, more speed, or other common goals. Here we can use the additional horsepower and sophistication to provide a model that can be thought of as a single machine, adhering to the prevailing paradigms yet providing the benefits of distributed processing. It's even possible, with high-function hardware, to provide a front end that presents the whole works to the human as a single, simple, albeit large, PLC for programming and analysis, and distributes the tasks heuristically to the various nodes based on locality of I/O, load balancing, or other criteria.

This is not unlike the Beowulf concept, and the model fits because the PLC concept is that of a parallel machine.
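
Roughly, in Python (everything here is hypothetical - the point ownership, the load counts - and nothing to do with the actual LinuxPLC code), the placement heuristic could look like:

    # The user writes one "big PLC" program; a scheduler assigns each task
    # to the node that owns most of the I/O it touches, falling back to
    # the least-loaded node on a tie.

    io_owner = {"valve1": "nodeA", "pump1": "nodeA", "temp7": "nodeB"}
    node_load = {"nodeA": 0, "nodeB": 0}

    def place(task_name, io_points):
        votes = {}
        for point in io_points:
            node = io_owner[point]
            votes[node] = votes.get(node, 0) + 1
        # prefer I/O locality, break ties toward the lighter node
        best = max(votes, key=lambda n: (votes[n], -node_load[n]))
        node_load[best] += 1
        return best

    print(place("fill_sequence", ["valve1", "pump1"]))  # -> nodeA
    print(place("oven_loop", ["temp7"]))                # -> nodeB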

This is just a "back of the envelope" sketch I did as an example of how we can use geekware to benefit a crowd that abhors complexity, learning, or even change, no matter what the benefits. Since it seems quite unrealistic to change minds, the task becomes one of advancing the state of the art without requiring effort from the target audience. Capable machines and a flexible, configurable OS can do that. Typical DCS schemes go in the wrong direction; perhaps they haven't been listening to the same folks I have.

> Anyway, the reason I made the mammals analogy (from shrew through blue
> whale!) is really because fundamentally I think I'm arguing for
> diversity and specialisation. I do actually believe (short pause while
> I whistle the corporate anthem...) that, after a quarter of a century
> of experience, Eurotherm does a better job of PID temperature control
> than a generic software block would, or that AB's reliability and
> long-term spares availability makes a PLC a better bet long term than
> a commodity PC. Ultimately, if specialisation and expertise have a
> benefit and a value - which I think they have - then it's worth
> finding some way of incorporating them into your model, and being able
> to buy a relatively cheap and autonomous block of functionality off
> the shelf seems to me a good way to go.

Certainly experience and depth add value; it's undeniable that some folks are better at some things than others, or equally good at different approaches. Imagine if all these pockets were to work together and get pointed in the same direction - they could make super solutions, both deep and broad. That's what we are trying to do: provide a neutral vehicle for this sort of concentration of effort and pooling of resources. That's what excites me - that it would take so little cooperation to outpower the giants, and so little standardization to outpace them. They are made vulnerable by their refusal to cooperate and standardize, and shackled with the burden of developing everything individually.
>
> I suppose you could take the view that the Linux I/O box is sort of
> like Homo Sapiens, i.e. an adaptable all purpose solution (even when
> female and bearded (see previous post to list)) with a more or less
> common and standard comms protocol. But if I was looking for truffles,
> I'd most definitely use a dog or a French pig, whatever robotic Gnus
> the open source community came up with! And I most certainly wouldn't
> use a Microsoft product...
>
I'm not into the evolutionary analogy. It leads one to think that the future must be consistent with the past.

Regards

cww
--
Free Tools!
Machine Automation Tools (LinuxPLC) Free, Truly Open & Publicly Owned Industrial Automation Software For Linux. mat.sourceforge.net. Day Job: Heartland Engineering, Automation & ATE for Automotive Rebuilders.
Consultancy: Wide Open Technologies: Moving Business & Automation to Linux.
 
C
Hi Mitch

I would be the first to agree that equipment must be matched to the environment, and some of the cost of typical industrial hardware is due to extended temperature ratings, etc. But even high-volume commodity silicon is typically available in industrial and military grades at a fairly modest premium, and an awful lot of control equipment never sees extreme environments. That, combined with the fact that the most recent processors and logic are typically much lower-power devices to suit battery operation, means making hi-rel equipment is much easier.

The free-enterprise solution is for enough people to want a particular platform to make appropriately hardened equipment a commodity - sort of a white-box automation platform. This is something I'm actively pursuing, as all the people who are doing or want to do PC-compatible control represent a market big enough to make this possible. The technology is there and priced very favorably; it's primarily a matter of packaging it for this market and making it the most attractive choice compared to the rather expensive competition. Watch this and the mat list for ideas I'm currently wrestling with. The idea I had this weekend is sounding pretty good.

Regards

cww
 
I started by saying that the structure of the Internet is very useful for users. The same usefulness - users accessing data - can be achieved for industrial facilities, but the software to do this properly is really rare right now.

> A plant should be set up to run like the net - the Internet. (We all agree the Internet is a great thing, right?) Internet sites are big business and have to be reliable - just as reliable as a plant.

> Is the Internet a great thing? Well, yes and no. September 11 demonstrates very clearly both sides of the coin.

The structure of the Internet is clearly very useful. The marketplace recognizes this, and it has made "web enabled" software a hot ticket. Most vendors have limited abilities via a browser, however.

> So here is the problem with centralisation in a nutshell, which any user of a mainframe or mini in the dim and distant past will recall. More users and more applications add more than serially to the load and response time, as overheads stack up, and eventually the system stops. The only answers are (a) control access by limiting users/applications (effectively losing the benefits of open access), or (b) throw increasingly powerful CPUs and faster networks into the mix.

IT trends are working in favor of systems such as automationX. Every year there is a doubling of server power for the same price - it's awesome! We are simply replacing hardware with distributed software components, distributed in space amongst a server environment. It makes so much sense to do this. Each hardware device out there, in particular those that harbor processors, carries high up-front costs, installation overhead (such as stable power) and maintenance. Replicating these hardware components in software on an IT system drastically reduces the price. In automationX you can update the functionality of 100 temperature controllers - each doing exactly what a hardware-based version does - in a matter of seconds. That is impossible with physical devices. As far as clients are concerned, the protocols are very efficient; memory and display complexity determine server resources. Compared to business IT apps, there are typically not that many users that need to tap into an industrial area at one time, and in practice we have never exceeded the limit to date.
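
As a generic sketch (Python, invented names - certainly not automationX's internals) of why one code change can retune a hundred loops at the next scan:

    # 100 controller instances share one class, so a single code change
    # updates them all - hence seconds rather than weeks of panel work.

    class SoftTempController:
        def __init__(self, tag, kp=2.0, ki=0.1):
            self.tag, self.kp, self.ki = tag, kp, ki
            self.integral = 0.0

        def scan(self, setpoint, pv, dt=1.0):
            error = setpoint - pv
            self.integral += error * dt
            return self.kp * error + self.ki * self.integral   # PI output

    # one line instantiates the whole "panel" of controllers
    controllers = [SoftTempController(f"TIC{n:03d}") for n in range(100)]
    print(len(controllers), "loops;", controllers[0].scan(100.0, 98.5))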

> I don't (yet) buy the use of Ethernet as an all-purpose automation network, by the way (as a data conduit up to IT, certainly, but not running round a machine).

For field networks Profibus does a great job. Ethernet might as well; we haven't tried a large install based on an Ethernet network as yet. The top end is all Ethernet, of course.

> I'm quite sure that applications based on a central PC work - the question is whether this is the best way to go, which is very largely a matter of opinion. If you care to get on a plane, I'll meet you at the airport and take you to meet a bearded lady I know - some people find her attractive (and I'm very fond of her). But is feminine facial hair the way to go for the general population? Walking around my own district, I see far more clean-shaven women. It seems to work for them.

I disagree - it is much more than opinion. It is value that drives the innovation. A client/server network that optimally replaces hardware with software, that leverages IT technology in a prudent way, and that provides a MUCH higher level of performance than any traditional approach at a lower price - one that can be delivered faster and more accurately, and requires less maintenance once installed - delivers a clear value advantage over current models of automation. With such a system the decision is a logical business one that makes a lot of sense. Last week we met with an executive of a major oil company. She said, and I quote: "I want rid of that DCS, those PLCs and the HMI system I have there, and replace it with a single computer with a hot mirror. Can this be done?" We said, "That's refreshing - to be honest, that's the first time we've heard it worded that way. Of course we can." I didn't notice any beard.

Paul Jager, CEO
www.mnrcan.com
 
Yes, PC-based temperature control is quite possible. Most temperature loops are quite slow and don't need a 'real-time' OS. We use a desktop PC running Windows 2000 to control 87 loops - and the PC is also running Wonderware InTouch at the same time. The PID control runs in the background. We use PWM (pulse-width modulated) solid state relays for power switching to the electric heater elements. We use Adam serial I/O modules for the thermocouples and relay outputs; the Adam serial I/O is very economical.
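
For anyone curious, the background loop amounts to something like the generic sketch below (illustrative gains and names, not the actual pCon code): a slow PI(D) calculation whose percentage output becomes the SSR's on-time within a fixed window.

    # Time-proportioning ("PWM") output for a solid state relay: a PID
    # output of 40% means the relay is on for 4 s of every 10 s window.

    def pwm_on_time(pid_output_pct, window_s=10.0):
        return max(0.0, min(100.0, pid_output_pct)) / 100.0 * window_s

    def pid_step(sp, pv, state, kp=5.0, ki=0.2, dt=1.0):
        error = sp - pv
        state["i"] += error * dt
        return kp * error + ki * state["i"]   # % output, clamped above

    state = {"i": 0.0}
    out = pid_step(sp=200.0, pv=196.0, state=state)
    print(f"PID output {out:.1f}% -> SSR on {pwm_on_time(out):.1f}s per 10s")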

Adam also supplies free OPC driver software with their modules, which communicates with the PC-PID.COM pCon PID controller program running on the PC.

Hope this helps.
 
> I want use the PC itself for controlling the temperature. My question to the community is whether anybody has done this?

To get back to your original question: yes, we have been using PCs to control temperature (of plastic extruder barrels (split-range heat/cool) and melt pipes) for many years, with the benefits of an integrated HMI, 64-bit accuracy, networking, Excel file records, data logging, bar charts, trend strip charts, recipe management, daily operator logs, five security levels, on-line help, etc.

>any potential problems?
Provided the customer does not connect the PC to the Internet and download lots of junk shareware, freeware, screensavers, virus scanners, etc., we have found PCs are just as reliable as PLCs. Once in a while (about once per 10 computer-years) the hard disk fails and files get corrupted; this is usually due to dust or vibration. So you should use a 'flash' solid-state hard drive or, at least, have a Norton Ghost CD backup copy of the HD ready should it need replacement or re-loading.

Warren,
http://www.pc-pid.com
 