PC-Based Temperature Control


Thread Starter

Kishore

When my company moved from VAX- or PLC-based systems to PC-based automation, we carried some legacy hardware with us. For example, we still use a microcontroller-based temperature controller, and the PC communicates with it to set and read some parameters. I want to change the way this works in our company and use the PC itself to control the temperature. Most of our equipment needs A/D and D/A cards, and I could use an unused channel for interfacing.

My question to the community is whether anybody has done this and, if yes, whether there are any potential problems. I am confident, but I would be glad to forward your comments to my department to get approval to go ahead with the project.

Currently we use VC++ 5.0 and NT 4 SP3, and the platform is an Intel Pentium industrial computer with 256 KB RAM. Our scan time is 10 ms, which is not to be sacrificed.
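For reference, the control I have in mind is nothing more exotic than the sketch below. The ad_read_channel/da_write_channel calls are placeholder stubs for whatever the A/D and D/A card vendor's driver actually provides, and the gains are made up; it is a minimal illustration, not production code.

#include <windows.h>

// Hypothetical board-driver interface -- substitute the card vendor's API.
double ad_read_channel(int ch) { return 150.0; }      // stub: read temperature
void   da_write_channel(int ch, double v) {}          // stub: drive the heater

int main()
{
    const double Kp = 2.0, Ki = 0.1, Kd = 0.5; // illustrative gains
    const double dt = 0.010;                   // 10 ms scan period
    const double setpoint = 150.0;             // degrees C
    double integral = 0.0, prev_error = 0.0;

    // Raise priority so the scan is less likely to be preempted.
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);

    for (;;)
    {
        double pv = ad_read_channel(0);
        double error = setpoint - pv;
        integral += error * dt;
        double derivative = (error - prev_error) / dt;
        prev_error = error;

        double out = Kp * error + Ki * integral + Kd * derivative;
        if (out < 0.0)   out = 0.0;    // clamp to 0-100% actuator range
        if (out > 100.0) out = 100.0;
        da_write_channel(0, out);

        Sleep(10); // crude pacing; whether NT honours this is the question
    }
    return 0;
}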

Thank you for your inputs,

Nilesh Pradhan
Texas Instruments Inc.
Attleboro, MA
USA
 
256 KB ... I think you mean 256 MB.
Temperature control is really easy. Try a product like LabVIEW to start.
 

James Fountas

>Our scan time is 10 ms, which is not to be sacrificed.<

My experience tells me that PCs have interrupts, software demands, and hardware demands that can make small update times impossible to guarantee.

On the other hand, PLCs may not update analogs and PID loops at the same rate as the scan time.
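You can measure this for yourself in a few lines. The sketch below (plain Win32, no driver code) asks for a 10 ms period and reports the worst period actually delivered; on a loaded NT box the worst case is typically several times the request.

#include <windows.h>
#include <stdio.h>

int main()
{
    LARGE_INTEGER freq, prev, now;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&prev);

    double worst_ms = 0.0;
    for (int i = 0; i < 1000; ++i)
    {
        Sleep(10);                          // request a 10 ms period
        QueryPerformanceCounter(&now);
        double ms = 1000.0 * (now.QuadPart - prev.QuadPart) / freq.QuadPart;
        prev = now;
        if (ms > worst_ms) worst_ms = ms;
    }
    printf("worst observed period: %.2f ms (10 ms requested)\n", worst_ms);
    return 0;
}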

Jim
 
10 ms for temperature control? Why such resolution? Normally there is enough thermal lag in any temperature system that scan times on the order of 500 ms or more are just fine.
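To put rough numbers on that, here is a throwaway simulation of a first-order thermal process under PI control, run at both a 10 ms and a 500 ms scan. The time constant and gains are invented for illustration; the point is only that both scan rates settle to essentially the same place.

#include <stdio.h>

int main()
{
    const double tau = 60.0, gain = 2.0;   // process: 60 s lag (invented)
    const double Kp = 4.0, Ki = 0.05;      // PI gains (invented)
    const double periods[2] = { 0.010, 0.500 };

    for (int p = 0; p < 2; ++p)
    {
        double dt = periods[p];
        double temp = 20.0, integral = 0.0, setpoint = 100.0;
        for (double t = 0.0; t < 600.0; t += dt)  // 10 simulated minutes
        {
            double error = setpoint - temp;
            integral += error * dt;
            double u = Kp * error + Ki * integral;
            if (u < 0.0)   u = 0.0;
            if (u > 100.0) u = 100.0;
            // first-order lag: dT/dt = (gain*u + ambient - T) / tau
            temp += dt * (gain * u + 20.0 - temp) / tau;
        }
        printf("scan %3.0f ms: temperature after 10 min = %.2f C\n",
               dt * 1000.0, temp);
    }
    return 0;
}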

>My question to the community is whether anybody has done this and if
>yes, are there any potential problems?

Like a lot of PC users, you are seeing a natural progression - the desire to move software out of arcane devices (microcontrollers, PLCs, etc.) and into the mainstream IT environment.

That is what we do; servers run uninterrupted for long periods of time, i.e. years. Properly designed software will protect control at scheduled scan times just like a dedicated controller. We've proven this over the past six years. The lowest we can go in standard product form is the 10-20 ms range for scan times.

Paul Jager
CEO
www.mnrcan.com
 

Sasko Karakulev

Hi, it is not good to use a PC in closed-loop control. Windows instability may cause the control and the process to fail, with big damage. The right way is to use dedicated PLC controllers for process control and to network them all to the PC by FieldBus, Profibus, Modbus, or ASCII protocols, or some other networked solution.
You will find more info at http://www.advantech.com
Regards
Sasko Karakulev
 

Ranjan Acharya

<clip>
it is not good to use PC in the closed loop controls...
</clip>

I think that there are a lot of people who would disagree with this statement. You can find successful closed-loop control systems implemented on a PC platform using Windows NT, Windows 2000, DOS, QNX, Linux, various flavours of RTOS, and so on. You can obviously find successful PLC-based solutions too.

There are manufacturers of both PLC-based control products and PC-based control products. Each has its own merits and flaws.

Most automation solutions can be solved with either platform; very few can only be solved on one. For example, if you were going to do something strange with _very_ high speed requirements (sub-millisecond), then you probably could not do it with a PC, but would require a unique PLC, and so on.


 

Curt Wuollet

And the situation matters. For building environmental control, where half an hour to discover the problem and reboot isn't a big deal, even Windows would do. For heat treatment of missile bearings or wafer heat profiles, or anything where a failure is costly or dangerous, the list would narrow pretty fast.

For the original question of replacing diverse dedicated controllers with central PC control, I can't see how the reliability or functionality would be improved except with the most reliable on Ranjan's list and very serious programming. Just whipping out some VB, even if it works well, would very likely tank your MTBF. Dedicated controllers tend to be reliable, and an individual controller failure would usually be less costly than failure of several processes at once, even if infrequent. Bottom line is that it makes sense only if it is a clear improvement.

Regards

cww

--
Free Tools!
Machine Automation Tools (LinuxPLC) Free, Truly Open & Publicly Owned Industrial Automation Software For Linux. mat.sourceforge.net. Day Job: Heartland Engineering, Automation & ATE for Automotive Rebuilders.
Consultancy: Wide Open Technologies: Moving Business & Automation to Linux.
 
I have been doing industrial refrigeration PC-based control since 1993, and the company has had systems running since '87. In 1999 we replaced some old 286 PCs that had been running since the late 80's and would still be running today. While there are critical applications where PLCs are still the way to go, temperature control is not one of them. This is an industry where PC control far outshines the PLC. Through PC control we give our customers flexibility, reporting, archiving, and trending, none of it approachable by a PLC. We can make online changes while the plant is running, get into the running program to help the operator troubleshoot a hardware problem, and the operator can call in from anywhere in the world to check on his plant, download history, etc.

While there are times when a PC may lock up, these are few and far between and well within the tolerance of a refrigeration facility. Once or twice a year would not seem too many to me. The hardware we use is rock solid. Places with 1000 analog and digital points might call for 1 or 2 replacements per year. As far as reliability: with over 100 systems out there, there are 2 of us who do the programming, and only 1, me, who takes the trouble calls. The average customer calls maybe 2-3 times per year, and the calls usually turn out not to be the computer's fault.

For your application a PC control system would be perfect. PID temperature control would keep your rooms at +/- 0.1 F.

Hope this unbiased statement helps clear the air of those stuffy PLC users. Felt good to me.
 

Johan Bengtsson

Of course it is possible to do closed-loop control with all of the platforms mentioned below, but as for the original question, where 10 ms was demanded, plain Windows (any flavour) and Linux simply would not qualify, because the task scheduler works in the 10-16 ms range and you can never guarantee less than 2-3 times that (and even that is actually hard to guarantee).

We have a training software package in which we successfully run several PID loops in Windows (all flavours), both controlling internally simulated processes and externally connected process models in real time, and below 100 ms is hard to do reliably.

Of course there are add-ons to both Windows NT and Linux that make it possible to do a LOT better, but without those it is hard.
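For what it's worth, the usual stopgaps on plain NT look like the sketch below: crank the multimedia timer down to 1 ms and boost priority. It improves the average case noticeably; it still guarantees nothing, which is exactly the problem.

// Link with winmm.lib. Helps average jitter on NT; it is NOT hard real time.
#include <windows.h>
#include <mmsystem.h>

int main()
{
    timeBeginPeriod(1);  // request 1 ms timer/scheduler granularity
    SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS);
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);

    // ... control loop goes here; Sleep(10) now wakes much closer to 10 ms ...

    timeEndPeriod(1);    // restore the default granularity on exit
    return 0;
}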


Then the question comes back to what temperature really needs to be controlled that fast, but that is another question...



/Johan Bengtsson

Do you need education in the area of automation?
----------------------------------------
P&L, Innovation in training
Box 252, S-281 23 Hässleholm SWEDEN
Tel: +46 451 49 460, Fax: +46 451 89 833
E-mail: [email protected]
Internet: http://www.pol.se/
----------------------------------------
 
PCs for control are great; it's the software choice I have reservations about. Today's PCs have a pretty good MTBF, comparable to PLCs in my environment. Hardware failures are very infrequent. I just wouldn't use Windows to control my porch light. As I've mentioned before, my shop is populated with "unreliable" PCs I fixed with a CD. So my passion for PC control is strongly conditional, and my advice was based on that. With that proviso, you won't find a stronger advocate for PC control.

If this guy is gonna write some VB to replace dedicated controllers, that is a world of difference from using an RTOS or Linux to do so. That is, IMHO, the biggest obstruction PC control faces. Right now, PC control is synonymous with the Windows experience. Pretty tough to sell PC control to participants in that experience. We will never get past the "PCs are junk" argument until that changes. Yet people are unwilling to change. It's very frustrating.

Regards

cww
 
That's right, Curt. The offerings of the major vendors are for the most part Windows based, and this is unfortunate. Experience with Windows industrial products would lead one to believe a superior solution to traditional methods is not possible.

Serious software written for Linux, Unix, NT/Nutcracker, QNX, etc. performs side by side with, and above, the best of PLCs and DCS. It is critical to appreciate the difference. Once you've rounded that corner, you won't return. You'll be looking to put everything from your temperature control loops to a 6000-I/O plant on a server.

Paul Jager
CEO
www.mnrcan.com
 
Unless the temperature control is to modulate something like a flame on a thin surface, a scan time of 0.5+ seconds is more than adequate.

DCS is so expensive that, to maximize system capacity, 90% of industrial loops scan at 0.5 to 1.0 seconds. These are flow, pressure, level, combustion, etc. Who needs a faster scan for temperature?

A properly tuned and designed server will run gobs of PID loops (1000s) at 100 ms or less with utmost integrity. The servers are only limited by the speed of the Profibus, Ethernet, DeviceNet, etc. used for I/O access. DCS controllers can't even come close for capacity. The reason DCS controllers choke is that they use processors that lag way behind IT products and are sometimes limited by RAM.
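The arithmetic bears this out: the PID math itself is trivial for any current processor, so the fieldbus, not the CPU, sets the ceiling. A rough sketch that times 5000 loop updates of pure computation (no I/O, invented gains) makes the point; on ordinary hardware it comes in at a small fraction of a 100 ms scan.

#include <stdio.h>
#include <time.h>

struct Pid { double kp, ki, kd, integral, prev, sp, pv, out; };

int main()
{
    static Pid loops[5000];
    for (int i = 0; i < 5000; ++i)
    {
        loops[i].kp = 2.0; loops[i].ki = 0.1; loops[i].kd = 0.0;
        loops[i].integral = 0.0; loops[i].prev = 0.0;
        loops[i].sp = 100.0; loops[i].pv = 90.0;
    }

    const double dt = 0.1;        // 100 ms scan
    clock_t t0 = clock();
    for (int scan = 0; scan < 100; ++scan)       // average over 100 scans
        for (int j = 0; j < 5000; ++j)
        {
            Pid &p = loops[j];
            double e = p.sp - p.pv;
            p.integral += e * dt;
            p.out = p.kp * e + p.ki * p.integral + p.kd * (e - p.prev) / dt;
            p.prev = e;
        }
    double ms = 1000.0 * (clock() - t0) / CLOCKS_PER_SEC / 100.0;
    printf("5000 PID updates: %.3f ms per scan\n", ms);
    return 0;
}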

Paul Jager
CEO
www.mnrcan.com
 
Another facet is physical. Right now, if you could magically make your PLC scan at 1 µs, you would switch it back fast, as chaos would result. The bandwidth for control at these speeds is simply impossible with random wiring, wires in conduit, noise, crosstalk, and all the other uglies of getting high-speed edges and pulses from place to place. As long as the 1 ms threshold isn't crossed, all these things can be brute-force filtered and (mostly) ignored. This is a tradeoff to keep things in the electrician's scope, rather than requiring controlled impedances and transmission lines. If you put a good scope on typical PLC wiring, sometimes it's really amazing that this stuff works as well as it does. Complexity will rise exponentially with higher speeds, and it's no mean feat to get things done faster and maintain this ease of use. That's why a lot of the arguments about Ethernet not resolving collisions fast enough, and about latency and scheduling delays on Linux, etc., are red herrings. There are only a few applications, like servo loops, that can make use of higher speeds. For the general mess of sensors and actuators, the active hardware and software is not going to be the limiting factor. And PLCs, built in recognition of the physical constraints, are unlikely to fill a 10 MHz pipe.

By going to a software PLC model, most processing can be decoupled from the I/O tedium. Any of today's PC processors can do vast amounts of work in 1 ms. It will require great ingenuity and engineering _specific_ to our small market segment to overcome this disparity. It's simply not going to happen in a one-loop program on an 80186. And server class requires a certain size of physical plant to be cost effective. The in-between world will be best served with PCs and a flexible OS. Most needs can be met without specific OS support. With the current state of the art, some will require very specific OS support to be safe, reliable, and auditable. This specialization leaves MS out.

Right now, logically, the best engineering solution is Linux, with systems like MAT PLC and a distribution purpose-built with automation features, because _we_ can do that with Linux. And it would be a level playing field and inherently standardized, not to mention open, auditable, and scrutinized by many eyeballs. Please tell me what part of this doesn't make sense or is anything but good engineering analysis.

Regards

cww
 
The posts from Curt and Paul Jager interest me in that, in effect, they are suggesting a return to a centralised approach to control - very much shades of the 1960s and mainframe computers serving terminals, in my view.

There are several problems with this approach. One is, as Curt notes, that it is not possible to arbitrarily extend the capability of a single central device, since the complexity of communicating all the information required back to a central point overwhelms the computations. Most important in any real situation is that it creates a massive vulnerability - if the CPU/server goes down, everything stops. Looking at temperature control in particular, since that is where the thread started, this can cause significant costs as zones fall out of operating temperature, and even physical damage (for example, cracking of a vessel or breaking of an extruder screw).

I would suggest that a better architecture for dealing with complex situations at high speed is multiple smart devices, capable of independent operation, passing status and synchronisation (not I/O) information between themselves. This allows sub-devices to be added into the system wherever needed, without affecting the overall response time of individual sections of a machine or plant, and with a much smaller effect on system complexity. You never even have to start worrying about the effects of comms jitter, task latencies, and so on, and you never ever need an IT-capable processor to deal with it all, because you use a divide-and-conquer philosophy and split problems into more digestible sub-problems requiring less absolute raw speed.

After all, why use remote I/O devices, which must almost by definition be microprocessor-based to run their comms stacks? Why not use the microprocessor to take the load off the central processor by performing, say, PID control? The costs are going to be comparable, and smart devices that are physically close to their point of application are generally going to be able to derive more information about it (for example, being able to measure fluctuations in the mains voltage to electrical heaters and compensate for them) and hence work better.
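To make that concrete, the sketch below shows the kind of register split involved; the layout is invented for illustration (real devices each define their own maps). The co-ordinator writes one setpoint word and reads back a few status words over the bus, while the PID runs locally at whatever rate the process needs.

#include <stdio.h>

// Invented register map for a hypothetical smart temperature device.
struct SmartDeviceRegisters {
    // Written by the co-ordinator -- a few words, infrequently:
    double setpoint;        // desired temperature
    // Maintained by the device -- read by the co-ordinator as needed:
    double process_value;   // measured temperature
    double output_percent;  // current heater drive
    unsigned short status;  // e.g. bit 0 = within band
};

// Device-side scan: local PI control, independent of network traffic.
void device_scan(SmartDeviceRegisters &r, double dt)
{
    static double integral = 0.0;
    double error = r.setpoint - r.process_value;
    integral += error * dt;
    r.output_percent = 4.0 * error + 0.05 * integral;   // invented gains
    if (r.output_percent < 0.0)   r.output_percent = 0.0;
    if (r.output_percent > 100.0) r.output_percent = 100.0;
    r.status = (error > -1.0 && error < 1.0) ? 1 : 0;
}

int main()
{
    SmartDeviceRegisters r = { 100.0, 20.0, 0.0, 0 };
    for (int i = 0; i < 1200; ++i)   // 10 minutes of the device's 500 ms loop
    {
        device_scan(r, 0.5);
        // stand-in process: first-order lag, 60 s time constant
        r.process_value += 0.5 * (2.0 * r.output_percent
                                  + 20.0 - r.process_value) / 60.0;
    }
    printf("pv=%.1f out=%.1f%% status=%u\n",
           r.process_value, r.output_percent, r.status);
    return 0;
}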


Tim Linnell (Eurotherm Ltd, but opinions are my own).
 
I would like to know what choice Nilesh Pradhan has made and how things are going. Maybe the webmaster can drop him a line and see if he can update us.

John
Techni-Systems
 
Hi Tim

My working concept is for what amounts to an I/O rack running Linux. That way it can be run standalone as a PLC, networked as intelligent I/O, or networked as cooperative distributed processors as the needs dictate. When I say PC, it can mean a MachZ on an I/O rack or a Beowulf cluster. Or, for that matter, a Linux instance on an IBM mainframe. Or a DragonBall on an intelligent sensor. The means are available _now_ for many, many approaches. The current state of the art for automation is scads of functionally identical but fiercely incompatible platforms. It is left as an exercise for the reader to decide which is more likely to solve these problems.

You might say that Paul and I have Linux in common, but we are diametrically opposed on centralization. When I speak of decoupling computation and comms from the I/O loop, I'm merely describing the more complex functionality available when you have an OS at your disposal. Imagine an assembly line, for example, with a dozen local controllers all using the same memory map. At practical I/O speeds this is not feasible. If the map is virtualized across a fabric of GHz Ethernet, completely synched between I/O cycles, it suddenly makes a lot of other common approaches look way too complex and expensive, as comms become simply a register operation. We will have to think a little differently to move forward. Or we can continue with what I've found so amusing: a 10 Mbit/s network that gets updated every three or four 3 ms scans by the cheapest available processor. Strange engineering.

Regards

cww


 
Hi Curt

The problem with trying to expand a centralised computational facility driving networked I/O is very clearly suggested by what you say about needing more than a 10 Mb/s network to work at all. As you say, in order to pull all the data back to a central point, you need a very fast network. You then have to cluster remote I/O functions into single largish units so as to share the cost of the networking hardware, which therefore requires point-to-point wiring to the point of application (which can be very expensive).

But do you actually need a coherent central map of all I/O in a central server? To me the answer is obviously no - it's the automation equivalent of control freakery, and is a brute-force attempt to solve a problem that's better addressed by changing methodology, in my view. It's much more effective to use smart devices, or smart device clusters, maintaining a sufficient shared image of their local I/O requirements to do the job and passing/consuming only higher-level data from local clusters. This is how any complex organisational problem is dealt with in most other fields I can think of - software, hardware, PC usage, even people!

The 'network scan time' becomes a meaningless concept. A central processor does not need to scan all the I/O, but just co-ordinate the operation of individual smart devices or clusters. Local scan times are what they need to be in the local sub-systems, which will by definition be scanning less and so will need less powerful engines. And it's easy to add parallel sub-systems without anything more than a negligible effect on the overall co-ordinator scan time (rather than the linear addition the centralised model requires). Another benefit is that sub-systems can be designed to work without co-ordination, which allows (in the original example) temperatures to be maintained when a line is otherwise down. Because the network is freed from the onerous burden of timely I/O gathering, it can be used for value adding purposes, such as (for example) using SPC to detect tolerance shifts in key measures, and thereby providing predictive maintenance facilities. Sub-systems can be tested in isolation, as individual objects. And they are close to the job they are doing, so can do it better (with less wire!).

This isn't new - anyone using drives or temperature controllers from a PLC using Profibus, DeviceNet, or even Modbus is doing it already. People buy high level devices (from companies such as Eurotherm) because that way they can devolve complex functions into autonomous 'best of breed' devices and forget the details of how they work. There's no fundamental difference between this approach and slaving off small sub-systems based on an appropriate (small or large) PLCs for any given application, and there's really no reason at any level why comms need be anything more than a transparent register operation (this is very much how Profibus DP and DeviceNet work, for example; Modbus tends to work like this in the PLC world).

The benefits of an operating system? I can't see many in this particular context (and I have worked as a Unix systems programmer so I'm aware of both sides of the fence here). It might even be a problem, since the ability to spawn tasks will affect (critical in a centralised model) overall system timing, interrupt latency calculation will be complex because there is a hell of a lot going on, and the ability to use a faulty disk based file system could screw things up completely! I wouldn't necessarily argue against using a Linux box as a central strategy co-ordinator, but where my architecture differs is that it is using remote devices as *function* and data servers, rather than simple I/O devices (which is a waste of their electronic brains, frankly).

I suppose the reason I started into this discussion was that I was getting hot under the collar at the suggestion, I think from Paul Jager, that small devices not using 'IT-capable' processors were somehow deficient. This is pure nonsense. By using appropriate levels of smart technology intelligently at each level, you can decrease overall system costs and get much better performance with a lower-bandwidth (and probably cheaper) networking technology. I agree that we require a change of thinking to move forward to some degree, but much more "strange engineering" to my mind is ignoring 40 years of organisational theory and stepping back into some weird 1960s analogy of having a central mainframe frantically trying to do everything!

Cheers


Tim
 
> The problem with trying to expand a centralised computational facility
> driving networked I/O is very clearly suggested by what you say about
> needing more than a 10mb/s network to work at all. As you say, in order to
> pull all the data back to a central point, you need a very fast network.

I didn't say anything about a centralized computation facility. When I'm talking about decoupling the I/O scan from computation and networking, I am referring to the advantage of having a full OS on each node. The compute and networking stuff runs at processor speed and syncs with the I/O scan through the map. The map is duplicated on each node (not a problem with SOC resources) and is synced at network speed through a switched fabric, more or less depending on size and speed; this is automatic, so it is a virtual map. I'm fairly sure that with today's processors and commodity networks this could comfortably be accomplished between scans. This isn't rocket science either. When the map has been updated, the scan is scheduled and the cycle repeats. Thus you have many peers rather than one central facility, but they can operate as one and have no more than one scan of latency for communications through the map.

This could be done for much less than, say, Profibus, with far superior characteristics. For the simple example given, it would only require a 12-port Ethernet switch and cabling, eliminating collisions and non-deterministic behavior. With even a simple round robin, a fairly large map could be distributed to quite a few machines between scans, especially if only deltas are sent. One node could be designated as the master for simplicity, or each could send and receive a message from and to each of the others. With deltas, this could probably be done with 100 Mb/s commodity network stuff. But the implementation isn't important; the implications are. The things that PLCs do poorly are now trivial. Any brand or type of node can participate. And the whole mesh can do huge amounts of work in parallel, without a central host, by simply giving each a part of the job.
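To make the delta idea concrete: the per-cycle work at each node is no more than a compare-and-collect over the map. The sketch below shows just that step (map size is arbitrary; the actual broadcast over the switched fabric, and applying peers' deltas, are elided).

#include <stdio.h>
#include <string.h>

const int MAP_WORDS = 4096;          // arbitrary map size for illustration

struct Delta { int index; unsigned short value; };

// Fills 'out' with the (index, value) pairs that changed since the last
// cycle, refreshing 'shadow' so the next cycle diffs against this one.
// Returns the number of changed words -- the only traffic to broadcast.
int collect_deltas(const unsigned short *map, unsigned short *shadow,
                   Delta *out)
{
    int n = 0;
    for (int i = 0; i < MAP_WORDS; ++i)
        if (map[i] != shadow[i])
        {
            out[n].index = i;
            out[n].value = map[i];
            shadow[i] = map[i];
            ++n;
        }
    return n;
}

int main()
{
    static unsigned short map[MAP_WORDS], shadow[MAP_WORDS];
    static Delta deltas[MAP_WORDS];
    memset(map, 0, sizeof map);
    memset(shadow, 0, sizeof shadow);

    map[7] = 1; map[1000] = 42;      // simulate two I/O points changing
    int n = collect_deltas(map, shadow, deltas);
    printf("%d of %d words changed this cycle\n", n, MAP_WORDS);  // -> 2
    return 0;
}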

> You
> then have to cluster remote I/O functions into single largeish units so as
> to share the cost of the networking hardware, which therefore require point
> to point wiring to the point of application (which can be very expensive).

See above.
>
> But do you actually need a coherent central map of all I/O in a central
> server? To me the answer is obviously no - it's the automation equivalent of
> control freakery, and is a brute force attempt to solve a problem that's
> better addressed by changing methodology in my view. It's much more effective
> to use smart devices, or smart device clusters, maintaining a sufficient
> shared image of their local I/O requirements to do the job and
> passing/consuming only higher level data from local clusters. This is how
> any complex organisational problem is dealt with in most other fields I can
> think of - software, hardware, PC usage, even people!

Good points. But if every node has the whole map, you don't need any hierarchy. It's just there. Many issues about how to distribute tasks also become trivial. Think about it. The economics are compelling today. And it's doable with today's technology and free software. It's not brute force with SOC-class hardware or PC-class hardware. It's small potatoes to keep the map current and scan I/O when you have the resources. And the commodity hardware to do this should certainly be competitive with the status quo. Low I/O counts would be more expensive, but we're used to expensive in this business. And if it were written once and shared, the burden would approach zero. It boggles the mind how much senseless duplication is built into the costs we pay today, and how much lower they could be with sharing and cooperation. The cost of a system would be pretty close to the programming and services. And none of it's anything special at all. Not a $130.00 connector in sight. This is the type of technology that obsoleted most supercomputers. Our task is simple compared to a Beowulf cluster, and those are "off the shelf" from IBM. The only reason we would need to keep the scan cycle is for familiarity. Otherwise event-driven or sequential programming would do.

> The 'network scan time' becomes a meaningless concept. A central processor
> does not need to scan all the I/O, but just co-ordinate the operation of
> individual smart devices or clusters. Local scan times are what they need to
> be in the local sub-systems, which will by definition be scanning less and
> so will need less powerful engines. And it's easy to add parallel
> sub-systems without anything more than a negligible effect on the overall
> co-ordinator scan time (rather than the linear addition the centralised
> model requires). Another benefit is that sub-systems can be designed to work
> without co-ordination, which allows (in the original example) temperatures
> to be maintained when a line is otherwise down. Because the network is freed
> from the onerous burden of timely I/O gathering, it can be used for value
> adding purposes, such as (for example) using SPC to detect tolerance shifts
> in key measures, and thereby providing predictive maintenance facilities.
> Sub-systems can be tested in isolation, as individual objects. And they are
> close to the job they are doing, so can do it better (with less wire!).
>
> This isn't new - anyone using drives or temperature controllers from a PLC
> using Profibus, DeviceNet, or even Modbus is doing it already. People buy
> high level devices (from companies such as Eurotherm) because that way they
> can devolve complex functions into autonomous 'best of breed' devices and
> forget the details of how they work. There's no fundamental difference
> between this approach and slaving off small sub-systems based on an
> appropriate (small or large) PLCs for any given application, and there's
> really no reason at any level why comms need be anything more than a
> transparent register operation (this is very much how Profibus DP and
> DeviceNet work, for example; Modbus tends to work like this in the PLC
> world).
>
> The benefits of an operating system? I can't see many in this particular
> context (and I have worked as a Unix systems programmer so I'm aware of both
> sides of the fence here). It might even be a problem, since the ability to
> spawn tasks will affect (critical in a centralised model) overall system
> timing, interrupt latency calculation will be complex because there is a
> hell of a lot going on, and the ability to use a faulty disk based file
> system could screw things up completely! I wouldn't necessarily argue
> against using a Linux box as a central strategy co-ordinator, but where my
> architecture differs is that it is using remote devices as *function* and
> data servers, rather than simple I/O devices (which is a waste of their
> electronic brains, frankly).
>
> I suppose the reason I started into this discussion was that I was getting
> hot under the collar at the suggestion, I think from Paul Jager, that small
> devices not using 'IT capable' processors were somehow deficient. This is
> pure nonsense. By using appropriate levels of smart technology intelligently
> at each level, you can decrease overall systems costs, and get much better
> performance with a lower bandwidth (and probably cheaper) networking
> technology. I agree that we require a change of thinking to move forward to
> some degree, but much more "strange engineering" to my mind is ignoring 40
> years of organisational theory and stepping back into some weird 1960s
> analogy of having a central mainframe frantically trying to do everything!
>
> Cheers

Already way ahead of you, simply by thinking outside the box. Visualize this model; really think it through. I think you'll see the enormous benefits gained by simply thinking differently and throwing cheap available hardware and software at the problem. My concept is nothing like Paul's or anything in the market. It's a clean-sheet application of today's technology to large automation. Go ahead, guys, tell me why it wouldn't be cheaper, faster, and better above a certain point count.

Regards

cww
 
Tim Linnell:
> The problem with trying to expand a centralised computational facility
> driving networked I/O is
...

I'm not sure Curt was actually advocating that sort of centralisation - as I read it, he was more in favour of a flexible, scalable solution that can run equally well brick-sized or mainframe-sized.

Obviously, this is going to eliminate the very tiniest of solutions - but the benefits of the same flexible software being deployed across the whole floor, from the PC/104 controlling three cylinders and a motor all the way to the top-level box watching over the whole factory, are likely to outweigh that.

> I wouldn't necessarily argue against using a Linux box as a central
> strategy co-ordinator, but where my architecture differs is that it is
> using remote devices as *function* and data servers, rather than simple
> I/O devices (which is a waste of their electronic brains, frankly).

I think the point is that the remote devices can also run Linux, and be better for it.


Jiri
--
Jiri Baum <[email protected]> http://www.csse.monash.edu.au/~jirib
MAT LinuxPLC project --- http://mat.sf.net --- Machine Automation Tools
 