Calibration of distortion for CCD


Thread Starter

Jason

I am working in the field of industrial vision. Due to the distortion of the CCD, I can't calculate accurate distances. How do I transform image coordinates to world coordinates accurately?

 

Johan Bengtsson

If you let the CCD see a grid of lines (i.e. known world coordinates) and store which pixel corresponds to each intersection, you can then work backwards by interpolation to find out what each pixel means in world coordinates.
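For instance, the stored calibration data could look something like this (a minimal C sketch; the names and the 10x10 grid size are made up, not from the post):

/* One grid intersection: a known world coordinate and the CCD
   pixel where the camera actually saw it. */
typedef struct {
    double world_x, world_y;   /* known position on the calibration grid */
    double pixel_x, pixel_y;   /* measured pixel of that intersection    */
} CalPoint;

/* Calibration table, one entry per intersection of the line grid
   (here a hypothetical 10 x 10 grid). */
#define GRID_COLS 10
#define GRID_ROWS 10
CalPoint cal[GRID_ROWS][GRID_COLS];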

Is your question about how to do the calculations, or about some other part of the problem?


/Johan Bengtsson

----------------------------------------
P&L, Innovation in training
Box 252, S-281 23 Hässleholm SWEDEN
Tel: +46 451 49 460, Fax: +46 451 89 833
E-mail: [email protected]
Internet: http://www.pol.se/
----------------------------------------
 

Joe Jansen/ENGR/HQ/KEMET/US

Questions:

What level of resolution are you using? I.e., what is your scale? 1 pixel
is equal to _____ mm/feet/miles/etc.

Which dimension are you having trouble reading? X and Y? Or are you
trying to measure depth?

Vision systems I have worked with in the past, specifically Acuity, have
been able to give X and Y coordinates for robot guidance down to 0.5mm
accuracy. Are you experiencing distortion at the edge of the image, or all
the way across? What type of cameras do you use?

Vision is all about "lighting and optics". If these two are correct, your
system should work. Try to cheat on either of them, and you may as well give
up.

--Joe Jansen
 
Measuring distance is relatively simple using a line gauge. First you need to set up a reference scale. Try an object whose exact length is known -- for example, a gauge block. Suppose you are using a 2" gauge block. Set up a line gauge along its length in the thresholded image -- colors displayed in the threshold image will be either black or white, no shades of gray. Use the line gauge to determine the width of the blob (the number of consecutive pixels of the same color). This number can then be used to relate pixel count to actual length. In other words, if your blob width is 160 pixels, then 160 pixels = 2". Be sure to set up a separate line gauge in both the X and Y directions, since the scale will generally differ between them. Hope this helps.
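As a rough sketch of that arithmetic in C (only the 2"/160-pixel figures come from the post above; the other numbers are made up):

#include <stdio.h>

int main(void)
{
    /* Calibration: a 2" gauge block spanned 160 pixels along X. */
    double scale_x = 2.0 / 160.0;   /* inches per pixel, X direction          */
    double scale_y = 2.0 / 158.0;   /* from a separate Y line gauge (made up) */

    /* A later measurement is then a single multiplication: */
    int blob_pixels = 240;          /* pixels counted by the line gauge */
    printf("width  = %.3f inch\n", blob_pixels * scale_x);  /* 3.000 */
    printf("height = %.3f inch\n", blob_pixels * scale_y);
    return 0;
}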

M.R. Brown
[email protected]
 

Johan Bengtsson

Well...

If you break down the picture into tetragons (by the grid), each tetragon read by the CCD represents a square in real-world coordinates.

Now we break down the problem and look at only one of these tetragons, ignoring the rest for a while...

Let's define our coordinate system: make x and y the CCD coordinates and X and Y the real-world coordinates.
This gives you the following for each tetragon:
xul,yul - XL,YU, upper left corner
xur,yur - XR,YU, upper right corner
xlr,ylr - XR,YL, lower right corner
xll,yll - XL,YL, lower left corner

Now any pixel inside this (x,y) should be transformed into real world coordinates X,Y.

Transforming from real world to CCD is relatively easy (note that the bottom corners get the (1-ya) weight, so that ya = 0 really means the bottom edge):
xa=(X-XL)/(XR-XL); //helper value, 0 = left, 1 = right
ya=(Y-YL)/(YU-YL); //as above, 0 = bottom, 1 = top
x=(xll*(1-xa)+xlr*xa)*(1-ya)+(xul*(1-xa)+xur*xa)*ya;
y=(yll*(1-xa)+ylr*xa)*(1-ya)+(yul*(1-xa)+yur*xa)*ya;
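Wrapped up as a self-contained C function (the Tetragon struct is my own packaging of the corner data above, not something from the original post):

typedef struct {
    double xul, yul;   /* CCD pixel of the corner mapping to (XL, YU) */
    double xur, yur;   /* CCD pixel of the corner mapping to (XR, YU) */
    double xlr, ylr;   /* CCD pixel of the corner mapping to (XR, YL) */
    double xll, yll;   /* CCD pixel of the corner mapping to (XL, YL) */
    double XL, XR;     /* world X at the left and right edges  */
    double YL, YU;     /* world Y at the lower and upper edges */
} Tetragon;

/* World (X, Y) -> CCD pixel (x, y): bilinear blend of the four corners. */
void world_to_ccd(const Tetragon *t, double X, double Y, double *x, double *y)
{
    double xa = (X - t->XL) / (t->XR - t->XL);   /* 0 = left,   1 = right */
    double ya = (Y - t->YL) / (t->YU - t->YL);   /* 0 = bottom, 1 = top   */
    *x = (t->xll*(1-xa) + t->xlr*xa)*(1-ya) + (t->xul*(1-xa) + t->xur*xa)*ya;
    *y = (t->yll*(1-xa) + t->ylr*xa)*(1-ya) + (t->yul*(1-xa) + t->yur*xa)*ya;
}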

The other way around is trickier (if someone can find a non-iterating way of doing this, I would like to hear about it too...).
Anyway, it is solvable with some iteration using the formulas above...

Assume xa = 0.5; this is obviously not right most of the time, but it is a good starting point...

yl=yll*(1-xa)+ylr*xa; //CCD pixel matching real world YL
yu=yul*(1-xa)+yur*xa; //CCD pixel matching real world YU
ya=(y-yl)/(yu-yl); //0 = bottom 1 = top

xr=xlr*(1-ya)+xur*ya; //CCD pixel matching real world XR
xl=xll*(1-ya)+xul*ya; //CCD pixel matching real world XL
xa=(x-xl)/(xr-xl); //0 = left 1 = right

Repeat this a few times; I would guess 2-5 iterations give you a good enough approximation. It should converge quite rapidly.

When you have finished iterating you get the X and Y as
X=XL+(XR-XL)*xa;
Y=YL+(YU-YL)*ya;

You obviously need to do these calculations using floating-point numbers, at least for xa and ya, but probably for the X* and Y* values as well...
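As one self-contained sketch of the whole iteration (reusing the hypothetical Tetragon struct and matching the corrected formulas above; the fixed five passes are my guess at the "2-5 times" suggested):

/* CCD pixel (x, y) -> world (X, Y) by iterating the edge interpolations. */
void ccd_to_world(const Tetragon *t, double x, double y, double *X, double *Y)
{
    double xa = 0.5, ya = 0.5;   /* starting guess */
    for (int i = 0; i < 5; i++) {
        double yl = t->yll*(1-xa) + t->ylr*xa;   /* CCD y matching world YL */
        double yu = t->yul*(1-xa) + t->yur*xa;   /* CCD y matching world YU */
        ya = (y - yl) / (yu - yl);               /* 0 = bottom, 1 = top */
        double xr = t->xlr*(1-ya) + t->xur*ya;   /* CCD x matching world XR */
        double xl = t->xll*(1-ya) + t->xul*ya;   /* CCD x matching world XL */
        xa = (x - xl) / (xr - xl);               /* 0 = left, 1 = right */
    }
    *X = t->XL + (t->XR - t->XL)*xa;
    *Y = t->YL + (t->YU - t->YL)*ya;
}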

Another way of solving your problem is to build a new bitmap that is already re-mapped to real-world coordinates and use that for the vision code. I don't know if this would work, since I don't know how your vision code works, but it might.

Make a bitmap at least twice the size in each direction. For each pixel in this new bitmap, use the world-to-CCD equations above to find the matching source pixel, get that pixel's color, and put it in the new bitmap. The result is a scaled-up version of the original, remapped to world coordinates; you can make measurements in it directly, but it will not be as smooth, since it is scaled up.
It might work without scaling up as well, but some pixels will undoubtedly still be stretched, so the problem doesn't disappear; there is just a smaller chance of it causing trouble. Preferably it should be handled properly instead.
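A sketch of that remapping loop, reusing the hypothetical Tetragon struct and world_to_ccd function from above (one tetragon covering the whole image and nearest-neighbour sampling; a real version would pick the correct tetragon per pixel):

/* Remap an 8-bit grayscale source into a world-aligned destination,
   e.g. with dst_w = 2*src_w and dst_h = 2*src_h. */
void remap(const Tetragon *t,
           const unsigned char *src, int src_w, int src_h,
           unsigned char *dst, int dst_w, int dst_h)
{
    for (int row = 0; row < dst_h; row++) {
        for (int col = 0; col < dst_w; col++) {
            /* World coordinate this output pixel represents.  Row 0 is
               taken as the world bottom here; flip if your bitmap's
               origin is top-left. */
            double X = t->XL + (t->XR - t->XL) * col / (dst_w - 1);
            double Y = t->YL + (t->YU - t->YL) * row / (dst_h - 1);
            double x, y;
            world_to_ccd(t, X, Y, &x, &y);
            /* Nearest-neighbour fetch, clamped to the source image. */
            int sx = (int)(x + 0.5), sy = (int)(y + 0.5);
            if (sx < 0) sx = 0; if (sx > src_w - 1) sx = src_w - 1;
            if (sy < 0) sy = 0; if (sy > src_h - 1) sy = src_h - 1;
            dst[row * dst_w + col] = src[sy * src_w + sx];
        }
    }
}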

The first way seems heavier on the processor, with the iterations and so on, but you only have to do it for the points you actually measure, not for the whole picture. The second way requires you to do the calculations for every pixel, and that is a lot of calculations.

Finding out which tetragon you are in is left as an exercise for the reader...

/Johan Bengtsson

----------------------------------------
P&L, Innovation in training
Box 252, S-281 23 Hässleholm SWEDEN
Tel: +46 451 49 460, Fax: +46 451 89 833
E-mail: [email protected]
Internet: http://www.pol.se/
----------------------------------------


-----Original Message-----
From: [email protected] [mailto:[email protected]]
Sent: Friday, July 06, 2001 4:35 PM
To: Johan Bengtsson
Subject: Thanks for your suggestion and I need more..


>Dear Johan,
> Thanks for your suggestion. I know this method, but I can't calculate
>backwards for pixels that are not at the intersection points. I have
>thought about using an interpolation method to calculate these pixels,
>but the intersection points in the image are not linear, so how do I
>interpolate? Can you give me more advice to solve my problem? What I am
>doing now is measuring the size of ICs and lead pitch, so I need the
>results to be more accurate.
>
>Regards
>Jason
 

Curt Wuollet

Just a thought that might help:
Intel has released some open source libraries (the OpenCV computer vision library) that deal specifically with camera calibration, etc. Even if you don't use them, the methods should save a lot of head scratching.

Regards

cww
 