Control standards and initiatives, was ALL: An (open) announcement

  • Thread starter Matthew da Silva
See replies below:

-----Original Message-----
From: Edelhard Becker <[email protected]>

>Hi all,
>
>I think I have to add my $0.02 here ...
>
>On Fri, Apr 14, 2000 at 09:12:43PM -0400, Phil Covington wrote:
>> ----- Original Message -----
>> From: "Armin Steinhoff" <[email protected]>
>>
>> > I'm not convinced ... IMHO a clean implementation for a UNIX
>> > system works more reliably than any Windows-based implementation.
>>
>> What evidence do you have that Windows based implementations are not
>> reliable? My first hand experience says that you are wrong...
>
>I don't know where you got your experience, but I never found a reliable
>Windows machine. On the contrary: which family of operating systems is
>famous for its "Blue Screen of Death"? And which one for running
>large-scale servers?

I find the above statement unbelievable. My company builds machine and process control systems comprising integrations of PCs and PLCs.
The PCs all run NT 4.0 and normally run 24/7. There are two classes of apps in each system: MMI (Citect or Wonderware) and raw-material optimization.


>Windows (including 2000) is _by_design_ a Desktop-OS for Personal
>Computers, which means: switch the PC on, do some calculations in
>Excel, write a letter, print it and switch the PC off again. Unix is a
>multi-user, multi-tasking OS, which means: run the OS endlessly and
>serve as many users and as many processes as the system allows
>(depending on CPU power and memory size). Therefore it has a
>completely different design.

I don't agree that the above is the usage model of industrial apps. In my office I use it daily for 8-10 hours or more. In the mills the stuff runs 24/7.


>At the University, we had SGI IRIX, HP-UX and Linux Systems and all of
>them run day and night for months without rebooting (usually until the
>building's electricians test the power system and simply shut off the
>mains) :-/

They don't have a monopoly on reliability.

<remainder clipped>
 

Wallinius Mattias

For this kind of input, look at Don Box's nice little document on the MSDN website. COM has no more conceptual weaknesses than CORBA, and regarding inheritance I would like to say that the subject of inheritance is far more than the specialization/generalization that Armin refers to. Inheritance can be implemented by containment or aggregation. To my recollection I can't see in any spec that CORBA supports specialization/generalization either, but CORBA and COM aren't mainly about this; they are about components, and as we are all very well aware, all models make tradeoffs, even Java.
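
To make the containment idea concrete, here is a simplified, made-up sketch in plain C, in the spirit of COM's C binding where an interface is just a struct of function pointers; it is not real COM code and all the names are invented. The outer object reuses the inner one by holding it and forwarding calls:

/* Simplified sketch, NOT real COM: an "interface" is a struct of
   function pointers, as in COM's C binding. All names are invented. */
#include <stdio.h>

typedef struct IAnimal IAnimal;
struct IAnimal {
    void (*Speak)(IAnimal *self);
};

/* Inner (reused) component */
typedef struct {
    IAnimal iface;
} Animal;

static void Animal_Speak(IAnimal *self)
{
    (void)self;
    printf("generic animal noise\n");
}

static void Animal_Init(Animal *a)
{
    a->iface.Speak = Animal_Speak;
}

/* Outer component reusing the inner one by CONTAINMENT: it holds an
   Animal and forwards calls to it, adding behaviour of its own. */
typedef struct {
    IAnimal iface;   /* interface the client sees             */
    Animal  inner;   /* contained implementation being reused */
} Dog;

static void Dog_Speak(IAnimal *self)
{
    Dog *d = (Dog *)self;                   /* iface is the first member */
    printf("woof, and then: ");
    d->inner.iface.Speak(&d->inner.iface);  /* delegate to the inner object */
}

static void Dog_Init(Dog *d)
{
    Animal_Init(&d->inner);
    d->iface.Speak = Dog_Speak;
}

int main(void)
{
    Dog d;
    Dog_Init(&d);
    d.iface.Speak(&d.iface);  /* client works only through IAnimal */
    return 0;
}

With aggregation the outer object would instead hand the client the inner object's interface pointer directly, which is where the COM identity rules start to matter.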
Lastly, I must admit that Phil Covington has a point in saying that COM is rejected merely because it comes from MS. It's a way of talking to MS systems, let's use it. MS dominates large parts of the market. Let's use their tools if nothing else. I think that one should be very careful when rejecting technology that one perhaps
has little knowledge of. If we want to be religious we can always go to church.

/Mattias
 

Armin Steinhoff

At 18:10 21.04.00 -0400, Wallinius Mattias
<[email protected]> wrote:

>For this kind of input, look at Don Box's nice little document on
>the MSDN website. COM has no more conceptual weaknesses than
>CORBA, and regarding inheritance I would like to say
>that the subject of inheritance is far more than the
>specialization/generalization that Armin refers to. Inheritance
>can be implemented by containment or aggregation.

The point is ... inheritance is NOT implemented.

>To my
>recollection I can't see in any spec that CORBA supports
>specialization/generalization either, but CORBA and COM aren't
>mainly about this; they are about components, and as
>we are all very well aware, all models make tradeoffs, even Java.
>Lastly, I must admit that Phil Covington has a point in saying that
>COM is rejected merely because it comes from MS.

Sorry ... it is rejected merely because it is a proprietary, sliding 'standard' (-> COM+ ... do you have a spec?)

> It's a way of talking to MS systems, let's use it. MS dominates large parts of
>the market. Let's use their tools if nothing else.

Why should I spend money ... when there are better toolchains available for free?

>I think that one should be very careful when rejecting technology that one perhaps
>has little knowledge of.

One should be very careful about using 'technology' which is based merely on marketing ...

I see no problem in leaving out the Palm PC/Windows CE 3.0 technology ... because that
technology doesn't allow you to stop running programs :))

>If we want to be religious we can always go to church.

It's up to you whether to believe in the 'MS church' or not.

Regards

Armin Steinhoff
 
R
>WinNT/2000 is a possibility,
> but I'm not that confident of its reliability. That leaves Linux, which is
> reported to be very reliable. My experience with Linux is that it is very
> solid. Therefore, I think that I should choose Linux.

My experience is that NT and Linux both approach the rock-solid robustness we expect of systems such as AIX and Solaris (noting that the latter systems generally run on better (and more expensive!) hardware).

But I have yet to know a system that never gives any problems. For me, the difference between Linux and NT is that with Linux I can identify, isolate and resolve the problem. With NT I generally cannot. It's not much fun when you have to
say to the customer 'try reinstalling the OS from scratch'; everybody knows that cure!
 

Gilles Allard

Phil Covington wrote:

> Edelhard Becker <[email protected]> wrote:
>
> > Windows (including 2000) is _by_design_ a Desktop-OS for Personal
> > Computers, which means: switch the PC on, do some calculations in
> > Excel, write a letter, print it and switch the PC off again. Unix is a
> > multi-user, multi-tasking OS, which means: run the OS endlessly and
> > serve as many users and as many processes as the system allows
> > (depending on CPU power and memory size). Therefore it has a
> > completely different design.
>
> Windows NT is not as unreliable as many *nix advocates make it out to be.
> I have had very positive experiences with NT/2000. I have had positive
> experiences with Unix and Linux also. I know of others that have not had
> positive experiences with Linux. Armin made a very generalized statement
> concerning Windows reliability. I could easily make other generalized
> statements about Unix or Linux, but I am sure an OS flame war would ensue
> that the moderator would quickly stamp out... :)

From my experience, Win9x and NT4 work most of the time. However, my average uptime for NT is around 1 month, while the average uptime for HP-UX is 9 months. For HP-UX the downtime was caused by a kernel upgrade (a planned operation), while for NT it is unplanned (during a night or weekend). Which one do you prefer? 24x7 operation is different from 8x5.

Gilles
PS: I trust that Linux has 24x7 capability (even if I do not have experience with Linux); however, I've never seen a very reliable NT4
implementation.
 
I have a friend who is the webmaster (technical engineering manager/hardware) for a top-ten-site dotcom. We were discussing Windows 2000 (for which he was a beta site). He says that in his opinion it is not that NT is unreliable; it is actually that most implementers don't bother to properly tune it for reliability. He says that a properly tuned server application should be just like that damn pink bunny...

Walt Boyes
 
R
Phil Covington wrote:

> To attack COM because the idea originated at MS is IMHO "wrong".

From a philosophical point of view, I would agree with that. But COM/DCOM are still very much MS's own offspring, not just an initial idea. MS is still in a position to change things and be sure that the industry will be required to follow.

Merge that with the reputation MS has for making 'amendments' that are not necessarily in the best interests of the user, and it appears to be a very shaky
situation indeed.
 

Edelhard Becker

Hi,

On Fri, Apr 28, 2000 at 12:25:40PM -0400, Bill Code <[email protected]> wrote:
> See replies below:
>
> -----Original Message-----
> From: Edelhard Becker <[email protected]>
>
> > Hi all,
> >
> > I think I have to add my $0.02 here ...
> >
> > On Fri, Apr 14, 2000 at 09:12:43PM -0400, Phil Covington wrote:
> > > ----- Original Message -----
> > > From: "Armin Steinhoff" <[email protected]>
> > >
> > > > I'm not convinced ... IMHO a clean implementation for a UNIX
> > > > system works more reliably than any Windows-based
> > > > implementation.
> > >
> > > What evidence do you have that Windows based implementations are
> > > not reliable? My first hand experience says that you are
> > > wrong...
> >
> > I don't know where you got your experience, but I never found a
> > reliable Windows machine. On the contrary: which family of operating
> > systems is famous for its "Blue Screen of Death"? And which one
> > for running large-scale servers?
>
> I find the above statement unbelievable. My company builds machine
> and process control systems comprising integrations of PCs and PLCs.
> The PCs all run NT 4.0 and normally run 24/7. There are two classes
> of apps in each system: MMI (Citect or Wonderware) and raw-material
> optimization.

So here are some numbers [side notes: see end of mail]: the German computer magazine c't measured the top-100 [1] webservers' [2] availability during a period of 32 days [3]. First hint: see how web administrators decide:
58 run Solaris + Apache
29 run Linux + Apache
10 run Windows NT4 + IIS

The mean downtime (in %) for domains consisting of a single server was:
0.3 Solaris
0.2 Linux
1.6 NT
which is a _factor_ of 8 between Linux and NT!!

The mean number of downtimes was:
5.9 Solaris
6.1 Linux
15.5 NT

The mean length (in minutes) of a downtime was:
25 Solaris
13 Linux
46 NT

These numbers show values for the different operating systems in the "field", administered by the normal staff (not optimized values from a vendor).
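
(As a sanity check, the figures are internally consistent: 1.6 % of the 32-day period is about 0.016 x 32 x 24 x 60 = 737 minutes, which matches 15.5 outages of 46 minutes each (roughly 713 minutes), and 6.1 x 13 = 79 minutes is about 0.17 % of the period, i.e. the published 0.2 % for Linux.)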

> > Windows (including 2000) is _by_design_ a Desktop-OS for Personal
> > Computers, which means: switch the PC on, do some calculations in
> > Excel, write a letter, print it and switch the PC off again. Unix
> > is a multi-user, multi-tasking OS, which means: run the OS
> > endlessly and serve as many users and as many processes as the
> > system allows (depending on CPU power and memory size). Therefore
> > it has a completely different design.
>
> I don't agree that the above is the usage model of industrial apps. In
> my office I use it daily for 8-10 hours or more. In the mills the
> stuff runs 24/7.

That is exactly the reason why, IMO, NT is _not_ appropriate for such use. It is still a desktop OS for PCs.

> > At the University, we had SGI IRIX, HP-UX and Linux Systems and
> > all of them run day and night for months without rebooting
> > (usually until the building's electricians test the power system
> > and simply shut off the mains) :-/
>
> They don't have a monopoly on reliability.

But they have much, much more experience in building OSes.

> <remainder clipped>

Sincerely,
Edelhard

[1] Top-100 sites according to number of hits per day, from an association that evaluates media for advertising.
[2] A webserver is something different from an industrial control application, but the target is the same: run a dedicated app, possibly under high load, as reliably as possible.
[3] Network and DNS problems were _not_ counted.
--
s o f t w a r e m a n u f a k t u r --- Software, that fits!
OO-Realtime Automation from Embedded-PCs up to distributed SMP Systems
[email protected] URL: http://www.software-manufaktur.de/
Fon: ++49+7073/50061-6, Fax: -5, Gaertnerstrasse 6, D-72119 Entringen
 

Hullsiek, William

This whole discussion on Windows-NT and Unix, Linux, QNX is very interesting.

Our Windows NT server, used for our MES / SQL Server, runs 24 x 7 and has had no downtime since it was installed in Feb 1999.

From my experience, you achieve your 99.99 % uptime by protecting equipment with UPS, dual power supplies, plenty of ECC memory, hot-swap hard drives, and top-rated equipment from the compatibility list.
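
(For scale, 99.99 % uptime allows only about 0.0001 x 365 x 24 x 60 = 53 minutes of downtime per year, roughly one minute a week, so a single unplanned reboot can eat most of that budget.)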

Our problems on the Windows NT side have largely been due to poorly written communication drivers, or to software engineers who have not been adequately trained. I almost took the system out one evening when I accidentally erased portions of the database (being rushed and tired).

I think training and experience with the operating systems, languages, and tool-sets have more to do with uptime than the operating system itself does. (Yes, people make a difference.) For more information read Peopleware by DeMarco and Lister.

If your team is trained in Unix, POSIX-like systems, C, and that tool-set, you will probably have better uptime with Linux and QNX.

Teams composed of MCSEs, MCDBAs, and MCSDs (Microsoft certification designations) can achieve the same result with Windows NT.


William F. Hullsiek
 
Walt,

Did he say "how" to tune it for reliability? (Remember the old DOS "FILES=30" fix and just what a difference that made to performance.) How do you "fix" NT? Why do we even have to mess with it?

Tony Firth, Electrical Eng.,
Quester Technology Inc.,Fremont, CA
 
Bill Hullsiek said it real well this morning:

>Our Windows-NT server used for our MES / SQL-Server, runs 24 x 7, and
>has no down time since it was installed in Feb 1999.
>
>From my experience, you achieve your 99.99 % uptime
>by protecting equipment with UPS, dual power supplies,
>plenty of ECC memory, hot-swap hard-drives,
>top-rated equipment on the compatibility list.
>
>Our problems on the Windows NT side have largely been due to poorly
>written communication drivers, or to software engineers who have not
>been adequately trained. I almost took the system out one evening
>when I accidentally erased portions of the database (being rushed and
>tired).
>
>I think training and experience with the operating systems, languages, and
>tool-sets have more to do with uptime than the operating system itself does.
>(Yes, people make a difference.) For more information read Peopleware by
>DeMarco and Lister.
>
>If your team is trained in Unix, POSIX-like systems, C, and that tool-set, you will
>probably have better uptime with Linux and QNX.
>
>Teams composed of MCSEs, MCDBAs, and MCSDs (Microsoft certification
>designations) can achieve the same result with Windows NT.

My friend who runs the top-ten website also points out that if you don't properly set up a *NIX system (pick your *NIX), it will crash and burn with the same regularity as an NT system that is not well set up.

In the final analysis, it isn't the OS, it's the operator.

Walt Boyes

---------------------------------------------------------------
Walt Boyes -- Director of New Business Development
Branom Instrument Co.-- P. O. Box 80307-- 5500 4th Ave. So.
Seattle, WA 98108-0307
Phone: 1-206-762-6050 ext. 310 -- Fax: 1-206-767-5669
http://www.branom.com -- http://www.branomstore.com
mailto:[email protected]
---------------------------------------------------------------
 

Michael Griffin

I *really* don't want to get into the middle of a "Windows vs. Linux" debate, but I would like to ask a simple sort of technical question while you guys are talking about Windows. I am willing to accept that Windows NT can be fairly reliable if it is properly set up by someone who really knows what he is doing. The question is, what are my chances of ever finding someone like that? Or perhaps closer to the point, what would it take for me to be able to do that?

I'm prepared to admit that I'm not a "webmaster for a top-ten-site dotcom", but then I'll bet I know a few things about my own field that your
friend doesn't. My problem though is that there are certain applications for which I need the characteristics of a PC - i.e. mass storage, monitor, keyboard, networking, CPU speed, etc.

However, I don't want to install something that I can't maintain. What sort of background does it take to be able to create a reliable Windows
NT system? A couple of night courses at Fanshawe College? Unfortunately, all the unreliable Windows NT systems that I see were set up by certified professionals, so I'm not too sure just how much good that would do me. You said that "most implementers don't bother to properly tune it for reliability", so it doesn't look like paying someone else to do it for me is going to do much good either.

We just put in for our preliminary capital budget for next fiscal year, and we have some money in there for implementing replacements for some
existing test systems (an improved method). I've got to start sourcing some reliable computer hardware, data acquisition boards, development software, and an operating system. Right now the field is entirely open; we haven't ruled out any options, including whether or not to use Windows NT.

I'm pretty sure I know how to write reliable application software (I've never had any complaints in that regard). The question is though, how do I get a reliable operating system, properly installed? Or perhaps, is this just a hopeless task in today's world?

**********************
Michael Griffin
London, Ont. Canada
[email protected]
**********************
 

Curt Wuollet

Perhaps, except my Linux box is protected by being a no-name clone with a commodity power supply, 16 MB of off-the-shelf PC100 RAM, and an eclectic collection of HDDs, none of which was selected for the task. It was passed down for me to hack on when it wouldn't reliably run NT. In addition to being my "desktop", it works as a server for a laser printer and as a depository for all the documentation and drawings we don't dare keep on the NT machine used for AutoCAD, which machine is on its fourth fresh reload of NT.

I don't do much administration on it, just add another drive when it gets full. I do have to shut it down for that. I thought it had a problem once when the power failed over the weekend, but I had left a floppy in the drive and it was simply trying to boot from that. This is not how I would recommend setting up a mission-critical machine. The point is, it has become important because it is the only machine in the company that hasn't required reloading, rebooting or replacement. It started life as a cast-off extra "beater" machine, given a reprieve by running RH 5.0. I doubt very much that you could even load Win2k on it.

This week, in honor of its long and faithful service, I will load RH 6.2 on it, and I found a couple more SIMMs. A shot from the air hose, 30 minutes of scheduled downtime, and it should be good for another two or three years. That's from "bad" hardware that was fixed by changing the OS. That makes me a little skeptical. We have other Linux boxen around that share the heritage of being troublesome with Windows. All that will be in the past; there are only a half dozen Windows machines left and they too will eventually go away or will be "fixed" with Service Pack 6.2 by Red Hat.

Regards

Curt Wuollet,
Linux Systems Engineer
Heartland Engineering Co.
 
-----Original Message-----
From: Edelhard Becker <ebecker@SOFTWARE-MANUFAKTUR.DE>

>On Fri, Apr 28, 2000 at 12:25:40PM -0400, Bill Code <[email protected]> wrote:
<clip>
>> I find the above statement unbelievable. My company builds machine
>> and process control systems comprising integrations of PCs and PLCs.
>> The PCs all run NT 4.0 and normally run 24/7. There are two classes
>> of apps in each system: MMI (Citect or Wonderware) and raw-material
>> optimization.
>
>So here are some numbers [side notes: see end of mail]: the German
>computer magazine c't measured the top-100 [1] webservers' [2]
>availability during a period of 32 days [3]. First hint: see how web
>administrators decide:
> 58 run Solaris + Apache
> 29 run Linux + Apache
> 10 run Windows NT4 + IIS
>
>The mean downtime (in %) for domains consisting of a single server
>was:
> 0.3 Solaris
> 0.2 Linux
> 1.6 NT
>which is a _factor_ of 8 between Linux and NT!!
>
>The mean number of downtimes was:
> 5.9 Solaris
> 6.1 Linux
> 15.5 NT
>
>The mean length (in minutes) of a downtime was:
> 25 Solaris
> 13 Linux
> 46 NT
>
>These numbers show values for the different operating systems in the
>"field", administered by the normal staff (not optimized
>values from a vendor).


My personal experience would motivate me to question the data. Like I said, I run NT every day as a development platform. And the systems we build while running NT? They also run on NT, 24/7, in mills.

>> > Windows (including 2000) is _by_design_ a Desktop-OS for Personal
>> > Computers, which means: switch the PC on, do some calculations in
>> > Excel, write a letter, print it and switch the PC off again. Unix
>> > is a multi-user, multi-tasking OS, which means: run the OS
>> > endlessly and serve as many users and as many processes as the
>> > system allows (depending on CPU power and memory size). Therefore
>> > it has a completely different design.
>>
>> I don't agree that the above is the usage model of industrial apps. In
>> my office I use it daily for 8-10 hours or more. In the mills the
>> stuff runs 24/7.
>
>That is exactly the reason why, IMO, NT is _not_ appropriate for such
>use. It is still a desktop OS for PCs.
...<clip>


The stuff we build runs 24/7 on NT in industrial plants.
 

Leon McClatchey

Hehe, I remember the DOS fix :) Meanwhile, I've got a solution for NT: it's called Linux :) Especially now that more applications are becoming available for Linux. From what I've played with, both on the job and at home, Linux is proving to be the most stable OS in many ways, as well as being very easy to configure (tune, if you will). Also, what are these rumors I've been hearing about Linux making advances into realtime applications?

There appears to be something to be said for open source systems as opposed to proprietary systems?

cya l8r
Leon McClatchey
mailto:[email protected]
Linux User 78912 (SuSE 6.2 box)

Math is like love -- a simple idea but it can get complicated.
-- R. Drabek
 

Jack Gallagher

This whole subject is getting old. If you don't like a product, don't use it. I am not going to spite myself because of a bad experience with a product. If the product matures and is the best fit for the job, I will use it. Could be Windows, could be Linux, could be anything. I just like to work in the field of software and systems. Why can't that be enough? Do you people really think that your individual opinions are going to change the whole software industry to use one product? PLEASE!

Jack Gallagher
Lead Software Engineer
SESCO (a subsidiary of HARMON Industries)
 

R A Peterson

Curt Wuollet <[email protected]> writes:

<< All that will be in the past, there are only a half dozen
Windows machines left and they too will eventually go away or will
be "fixed" with Service Pack 6.2 by RedHat. >>

I have an interesting anecdote to relate regarding MS Win98.

I have a nice PIII NEC computer I bought last summer that has developed a very strange problem.

When I try to run FreeCell, I get the hourglass cursor, as if it is waiting for something to happen, but it never gets past this.

I sent an email to NEC tech support describing this problem, which was not fixed by the following tactics:

1) installing all the Win98 updates from the MS web site
2) reloading Win98
3) reformatting the HDD, reloading Win98 and all application s/w
4) buying Win98 Second Edition and installing it over the original Win98 that came with the machine

I just got an email back saying that this is a known bug on some Win98 machines and is caused by an interaction between AOL Instant Messenger and
Win98 that MS and AOL have not been able to fix. They did not enlighten me further on why an interaction with a program that is not even running would have any effect whatsoever on another program.

Bob Peterson
CM#1412 ANA#R-182415
 
R
On Wed, 03 May 2000, you wrote:

> > If your team is trained in Unix, POSIX-like systems, C, and that tool-set, you will
> > probably have better uptime with Linux and QNX.
> >
> > Teams composed of MCSEs, MCDBAs, and MCSDs (Microsoft certification
> > designations) can achieve the same result with Windows NT.

I can almost agree with this, in the sense that the expertise of the team counts for a lot. But I feel you cannot put them on equal terms, because you can only go so far with NT. Just read MS certification material to find out why ... it is all:

To achieve this, you do this ...

The concept that one should actually know or understand the workings or logic behind it all is alien to the NT world, so when things do not go as per the book, you are left with the option of re-installing, trying different hardware configs etc. until something does work.

Let me give a recent pair of NT examples:

I had to dial into an NT workstation box and make a PPP connection. As I had no idea how to do this, and the online documentation was not much help, I had the help of an NT engineer. He was armed with all his textbooks (including official MCSE
course books). Our attempts to connect failed, and frustratingly we found no indication as to why; everything kept going round in circles. After spending all day on the problem we had to resort to plan B ... do it another way.

In a second example, an NT machine had been loaded up with AutoCAD and other software before being connected to a network. When the electricians finally made the connection to this new office, the machine refused to be configured
for networking. The guy who supplied the machine (who was not a qualified NT man) struggled for a couple of hours before saying we would have to re-install everything from scratch. This was not a nice option, as AutoCAD had been installed along with a package for electrical designs, and quite a lot of installation and configuration work would be lost. I persuaded him to contact his consultant, somebody with a full set of MS qualifications ... the result was that he suggested re-installing from scratch.

Now let's see how things can be under open source. I was having trouble with a serial interface, which would only work if it was the only active app. Clearly some type of buffer problem. A bit of digging around, and I found that the serial device was reporting itself as a 16550, and yet was not being set up with a FIFO because these can be buggy; in fact the Serial-HOWTO told me that only the 16550A would be configured with the 16-byte FIFO, because of buggy 16550 FIFO implementations. Well, perhaps mine was not buggy; I wanted to try...

Well, just look under:

/usr/src/linux/drivers/char/serial.c

and we find the following code:

/*
 * Here we define the default xmit fifo size used for each type of
 * UART
 */
static struct serial_uart_config uart_config[] = {
        { "unknown", 1, 0 },
        { "8250", 1, 0 },
        { "16450", 1, 0 },
        { "16550", 1, 0 },
        { "16550A", 16, UART_CLEAR_FIFO | UART_USE_FIFO },
        { "cirrus", 1, 0 },
        { "ST16650", 1, UART_CLEAR_FIFO | UART_STARTECH },
        { "ST16650V2", 32, UART_CLEAR_FIFO | UART_USE_FIFO | UART_STARTECH },
        { "TI16750", 64, UART_CLEAR_FIFO | UART_USE_FIFO },
        { 0, 0 }
};

Well that was not too hard to understand, or fix (at least for my case).
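
For anyone wanting to try the same thing, the change amounts to something like giving the plain 16550 entry the same FIFO flags as the 16550A; this is a purely local hack, and only sensible if your particular 16550's FIFO is not one of the buggy ones the Serial-HOWTO warns about:

        /* local hack: treat a (hopefully non-buggy) 16550 like a 16550A */
        { "16550", 16, UART_CLEAR_FIFO | UART_USE_FIFO },   /* was: { "16550", 1, 0 } */

and then rebuild the kernel (or the serial module) and reboot or reload it.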

(BTW, can anybody explain why, when I suggest that such accessibility is a GOOD THING, I get flames accusing me of being a communist? I can't see the connection).

At the end of the day the
rip-up-and-redo-with-something-different-till-it-works approach is valid for home users perhaps, and maybe for small systems. But with modern systems becoming ever more complex, professional systems engineers MUST be able to understand some of what is going on under the hood, and be able to poke at it when necessary.

This is especially true of industrial automation, where we are frequently doing things that are a far cry from the personal computing/EDP applications for which many of the systems we use are designed.

But I would also note two other factors. One is that system reliability is invariably entwined with system complexity. People all too often say that problems on NT are only due to buggy device drivers or erroneous applications. This is not an excuse; in fact it is inevitable for all systems. The actual kernel of an OS is very small (on all OSes). Strip out the device drivers and there is precious little left; that is the way it should be. However, if it is very difficult to write device drivers for the system (and it is notoriously difficult for NT/W2K, which is why even major companies such as HP have trouble getting them out), then system reliability will be degraded.

Likewise applications. The Win API, with all its various extensions, is a pretty volatile animal, and seems to be done on the principle of looking at what others have done and negating it. Nor is it consistent; it varies over the various versions
of Windows. As everybody knows, MS adds more and more 'features' and forces us to continually upgrade to them. The MS answer is to use their RAD tools and forget about the underlying system.

But look at this list: recently somebody asked a good question, "how do I access serial ports under NT?" It is a good question because they are different under NT than they are under W9x, or WinCE, or 3.1 come to that. Somebody replied "Buy VB". Yet almost every day we read people asking about the finer points of serial interfacing code to Modicon, 3964, or whatever. Let's face it, RAD tools are not the right tool for such jobs.
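
The honest answer involves a little plain C against the Win32 comm API rather than a RAD tool; under NT you cannot poke the UART registers directly from user mode the way people got away with under DOS and W9x, so everything goes through handles. A rough sketch only, with most error handling left out:

#include <stdio.h>
#include <windows.h>

int main(void)
{
    HANDLE h;
    DCB dcb;
    DWORD written = 0;

    /* On NT a serial port is opened like a file */
    h = CreateFile("COM1", GENERIC_READ | GENERIC_WRITE,
                   0, NULL, OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    /* Set up 9600 8N1 via the DCB structure */
    ZeroMemory(&dcb, sizeof dcb);
    dcb.DCBlength = sizeof dcb;
    GetCommState(h, &dcb);
    dcb.BaudRate = CBR_9600;
    dcb.ByteSize = 8;
    dcb.Parity   = NOPARITY;
    dcb.StopBits = ONESTOPBIT;
    SetCommState(h, &dcb);

    /* Send a few bytes to whatever is on the other end */
    WriteFile(h, "hello\r\n", 7, &written, NULL);

    CloseHandle(h);
    return 0;
}

The point is not that this is hard, but that you have to know the API; a RAD tool just hides it until it breaks.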

Basically MS forces us into using ever more gadget-ridden systems, irrespective of whether we need those gadgets, and this directly impairs reliability.

The other factor is that of scalability. This term has become much misused of late. What it traditionally means is that one can migrate applications between smaller and bigger platforms, so that you can use the simplest system required, or the largest demanded. It primarily implies portability. Interestingly, back
in the NT 3.51 days, MS agreed with this, but that was because they were basing NT on a microkernel so that it could easily be moved across different types of platform.

That portability seems to have been lost along the way. NT and W2K work on only a very limited range of hardware that is far too complex for many requirements (reliability suffers, not to mention costs), whilst they are not able to run on the biggest platforms that some apps require (admittedly, things like Amazon, Hotmail and the IRS are out of our ball park). But the small end is
an issue: very many automation apps would be quite happy to run on a low-cost, flash-based, cigarette-box-sized box, and they would be more reliable.

On the other hand, some technologies such as image recognition require the efficient use of huge areas of memory. The hardware vendors are delivering the goods; I can have 64-bit boxes with GBs of memory for reasonable sums,
but MS still cannot use them.

A lot of MS diehards do not realise how unscalable the Win API actually is, perhaps because they are not aware of what the alternatives are like. It is true that I can run Linux on a PDA, or in full 64-bit mode on a minicomputer, or on a 390 mainframe, using the same API, but that is only the tip of the iceberg. The truth is that just about all modern OSes have a very similar API and system
layout, so moving apps between, say, QNX, AIX, Solaris, BSD, and even BeOS is actually very easy; indeed, the differences between the unices, from little embedded ones up to supercomputer ones, are less than the differences between W9x
and NT. Let's not talk about WinCE; it is another animal and now into its third major revision.

So, at the end of the day, it is true that the whole team must be suitably proficient on the platform to be used, and it is true that many apps can be implemented either under NT or under Linux, but please do not pretend they are the same. I can transport a load of boxes on the back of a pickup, or in a station wagon; that does not make them the same type of vehicle. I use a range of OSes, and consider myself sufficiently competent to comment. Please do not flame me if you have only ever used one platform; go study, then comment.
 
R
On Wed, 03 May 2000, you wrote:
>Also, what are these rumors I've been hearing about
> Linux making advances into realtime applications?
>

They are not rumours; Linux has been doing realtime for years.

It used to be a bit guruish, as you had to collect various patches for the kernel, gather some special libraries for the compiler and basically DIY, but it has been used for some pretty spectacular applications, such as a group
of research students from Pisa who used Linux in a system that automatically drove a car 2000 km along public roads (Italian ones at that!).
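
Even without those patches, a stock kernel will give you reasonable soft realtime through the ordinary POSIX scheduling and timer calls. Here is a rough sketch of a 1 ms periodic loop (it needs root for SCHED_FIFO, and it is soft realtime only, nothing like the guarantees the patched hard-realtime kernels give):

/* Soft-realtime periodic loop using plain POSIX calls; link with -lrt
   on older glibc. Sketch only, not hard realtime. */
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

int main(void)
{
    struct sched_param sp;
    struct timespec next;
    int i;

    memset(&sp, 0, sizeof sp);
    sp.sched_priority = 50;

    /* Ask for a realtime scheduling class; needs root privileges. */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
        perror("sched_setscheduler");

    clock_gettime(CLOCK_MONOTONIC, &next);

    for (i = 0; i < 1000; i++) {
        /* advance the absolute deadline by 1 ms */
        next.tv_nsec += 1000000L;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_sec += 1;
            next.tv_nsec -= 1000000000L;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);

        /* ... one cycle of control work goes here ... */
    }
    return 0;
}

For anything with hard deadlines you still want one of the patched kernels or a dedicated RTOS.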

Now there are ready-made commercial distributions specially tailored for realtime embedded systems that make life a cinch, although the DIY packages
have also got much easier. As realtime embedded systems are often very small (and simple), many people prefer to roll their own systems. A great feature of Linux is that many distributions exist; everybody has heard of the general-purpose distributions such as Red Hat, SuSE and Caldera, but there are actually well over 50 distributions out there. The lesser-known ones are generally
tailored for specific applications (routers, single-disk recovery floppies, secure e-commerce servers, and, of course, realtime and embedded systems). The advantage of these specific systems is that they cut out a lot of unneeded
stuff and, in a single install, give you a system already set up for specialist applications.

Realtime Linux tends to exist as part of the embedded Linux community, which also deals with topics such as using Linux on miniature hardware, running from flash memory, and booting quickly, as well as drivers and interfaces for things like touch screens and other weird hardware of the sort that is also likely to interest people on this list.

For links, you may like to start looking at http://www.linux-embedded.org


> There appears to be something to be said for open source systems
> as opposed to proprietary systems?

They are not less problematic per se, but you can get to the root of the
problem and nail it down. Period.

BTW, Windows was born out of desktops, Linux out of servers. IMHO, Linux is not as good at the desktop as Windows, but do not be put off by the fact that it is not as easy to, e.g., write a letter on it as it is on Windows; it excels in
simplicity and capability when doing servers and embedded systems.
 