Linux Device Drivers and Kernel Version Relationship


Thread Starter

Anand Iyer

Hi List Members,

I have been installing Linux on an automation system. The vendor supplied a procedure for detecting the I/O module, along with several drivers for different kernels. When we ran insmod, the module failed to load because we had a newer kernel: 2.2.16, whereas the driver requires 2.2.12.

This brings up an interesting point: the relationship between device drivers and kernel versions. Basically, I believe that drivers should still install on higher revision numbers. The question takes on greater significance because, if there is a gaping security hole in a kernel and it is fixed in a later revision, a user may be unable to close that hole if his drivers do not work with the newer kernel.

Any ideas on how to work around this problem (other than installing the required 2.2.12 kernel), or any other comments or similar observations, are most welcome.

Joe Bouchard

When you "make xconfig" a kernel before compiling, there is an option which basically asks "check for module versions?" Turning this off may
solve the problem (and perhaps lead to other problems).

Hope that helps,
Joe Bouchard

Peter Wurmsdobler

As a quick shot, three solutions:
1. Recompile the module, if you have the source code.
2. Ask the company you got the board from to recompile it for your kernel, or to give you some advice, or the source code ;-)
3. "insmod -f" may work, but it may also hang your system, or it may not; try and see.

Curt Wuollet


Try looking up CONFIG_MODVERSIONS. This makes modules less dependent on the exact kernel version; it will help as long as the APIs haven't changed. There are
other things that help, too. /usr/src/linux/Documentation is a good place
to start, and the Modules HOWTO will help as well; search with Google for the Linux Documentation Project. If you have the source, you can simply adjust the versioning if nothing you need has changed between two minor point releases. I usually simply ask the maintainer for an updated version. Most are very helpful if you treat them with respect, i.e. don't rant. If it is a closed-source driver, you're on your own. No one can help much except the people with the source.



Rokicki, Andrew

I think one way to get around this is to recompile the kernel with version checking turned off; I believe it is one of the first options on the menu.
Type "make menuconfig" (in /usr/src/linux; make sure you are root) to enter the configuration.
But I guess if you are going to recompile the kernel, you might as well update the kernel too.
See for some HOWTOs.
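Spelled out as a command sequence, Andrew's suggestion looks roughly like this for a 2.2.x kernel (paths and make targets as they were in that era; run as root, and adjust for your distribution and boot loader):

```shell
# Rebuild a 2.2.x kernel after toggling module version checking.
cd /usr/src/linux
make menuconfig          # under "Loadable module support", toggle
                         # "Set version information on all symbols for modules"
make dep                 # 2.2.x kernels needed an explicit dependency pass
make bzImage modules     # build the kernel image and all the modules
make modules_install     # install modules under /lib/modules/<version>
# then install the new bzImage via lilo (or your boot loader) and reboot
```

Which is why, as Andrew notes, once you are committed to a rebuild anyway, moving to a current kernel is often the simpler fix.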

Greg Goodman

a few suggestions and observations:

1. rebuild the driver (kernel module) for the new kernel, if you've got the source

2. ask the vendor to rebuild the driver for the new kernel, if you don't have the source

3. if you don't have the source and the vendor isn't willing to support the driver for new kernels, start looking for an alternative driver. eventually, you're going to need it.

4. you may be able to force the install (insmod -f), but you take a risk. if the newer kernel is incompatible with the old in some way that matters to the driver, you can hose your system.

kernel rev levels change because the kernel itself changes; only somebody familiar with the driver implementation and the nature of the kernel mods can tell you for sure whether the combination will work or not.

ask the author/vendor whether the driver should work if forced into a 2.2.16 kernel.


Greg Goodman

Ranjan Acharya

Not trying to push Windows or anything here, but since any Windows / Beast from Redmond thread always has a Linux posting, why not!

Doesn't all this kernel stuff sound remarkably similar to DLL creep and Service Pack hell with Windoze?


Commercial Reality.

Curt Wuollet

Not really,

_You_ can't do anything about it in Windows. A problem I can fix isn't that much of a problem. And no one will demand that you upgrade, prevent you from doing a low-cost/no-cost fix, or make you reload. If there is a problem, it would be if the driver is closed; then you're bent over the same old barrel. I've never had to ask twice for an OSS driver to be upgraded. If need be, I can do it myself or ask someone who can. The last time I asked, it took less than an hour, and I was embarrassed because the author apologized.

And while some odd Linux apps might require a library that isn't standard on your distribution, they won't simply replace the standard one and break everything else that uses it. It's not very similar at all in my experience. The source code and cooperation are the solution to the problem, even with a kernel that is rapidly evolving.

And speaking of service packs, it's real handy to be able to apply patches individually and selectively rather than hitting the button and hoping for the best. So far, in almost ten years of living with Linux, there hasn't been a patch I've _had_ to apply, as very few actually matter for the automation stuff I do. If it ain't broke, you don't have to fix it. At home, I play with development kernels and experimental patches; for production I use the stock distribution. It's worked well for me so far. I'm in control of what I use; it's a situation I manage for the least problems. If I did have to apply a patch, I can see exactly what it will do and what it will affect, and make the decision. No blind man's bluff.


Free Tools!
Machine Automation Tools (LinuxPLC) Free, Truly Open & Publicly Owned Industrial Automation Software For Linux. Day Job: Heartland Engineering, Automation & ATE for Automotive Rebuilders.
Consultancy: Wide Open Technologies: Moving Business & Automation to Linux.

Greg Goodman

The economic reality - in any field, using any business model - is that things stay broken until the cost of leaving them broken is greater than the cost of fixing them. What changes from field to field and model to model is how you measure cost, who bears the costs, and who makes the decision to fix what's broken.

In the Open Source world, problems usually don't last long because, for somebody out there, the cost to fix the problem is lower than the inconvenience of leaving it broken. In the proprietary world, things stay broken longer (sometimes _much_ longer, occasionally forever) because the cost to fix (and re-release) a problem is much higher, and the cost of leaving it broken much lower.

Why is the cost to fix higher for proprietary software? Because the pool of people who _can_ fix a problem is much smaller, and they are typically dedicated to other tasks already justified by a business case. Until the cost - to the company, not to an individual user - is high enough to justify diverting resources to deal with it, it won't happen. And cost to the company is measured in lost sales and lower market share. Most bugs, though troublesome to the user, don't really threaten the company's market position. The inconvenience a user must suffer before abandoning a broken piece of software for a competing package is typically much higher than the level of inconvenience that will motivate him to fix it himself, if fixing it is something he can do (especially if fixing it raises his standing in the community). So the risk to the company is low for all but the most critical bugs; a given problem gets prioritized and scheduled, then fixed and released, based on its value to the company.

None of this is intended to disparage sound and time-honored business practice and project management methodologies. It is intended to illustrate that, in some circumstances, the Open Source model can serve the end user better. The grail we in the OSS community are looking for is the means to serve the end user as well as Open Source does, while making enough money to ensure that we can continue to serve him at all.

My two cents,

Greg Goodman
Chiron Consulting