Somewhat high grade design?
I'm one of a few developers at my company, and I'm paid a very modest salary. This frequently affords me the privilege of complete solution and design ownership. Unfortunately, it also means I have to build it, and I'm responsible for all the issues with its design AND implementation.
But I was thinking this is some pretty good stuff. Does anyone think it's of a quality that could be sold at a contract rate? When I read it back, I think to myself: this is a $1000 email. If they want me to build it, that's another $5000, but the design is theirs. Shop it to India..
An email I recently wrote:
Subject: Machine vision 2.0 and what I know
Hardware:
Need the ability to grab an image and deliver it to [software] as a bitmap. This will involve some type of frame-grab hardware and a communication layer, which may be USB or CAN to the controller, or Ethernet provided we add a hub in the controller box (this would create two IPs on the network, and we may have issues finding the grabber devices).
Software:
The goal of the system will be to compute the XY and rotation offset from the camera housing to the tube sheet. The XY and rotation (referred to as XYR from now on) will be relative to a coordinate system placed anywhere on the tube sheet. A good example would be to have the coordinate system centered over a tube, with the X axis pointing along the primary axis and the Y along the secondary.
If the camera were centered over a tube, XY would report zero, or any multiple of the pitch. The XY value modulo the pitch will always report the same value regardless of which tube is chosen as the coordinate system origin (the unreduced XY will vary from frame to frame depending on which tubes are visible).
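A minimal sketch of that pitch-modulo property; the pitch value and offsets below are made-up numbers for illustration, not real machine dimensions:

```python
import math

# The camera's XY offset reduced modulo the tube pitch is the same
# regardless of which tube is chosen as the coordinate-system origin.
PITCH = 25.4  # hypothetical tube pitch in mm

def offset_mod_pitch(x, y, pitch=PITCH):
    """Reduce an XY offset to its position within one pitch cell."""
    return (x % pitch, y % pitch)

# The same physical camera position, measured from two different origin tubes:
a = offset_mod_pitch(3.0, 7.5)                      # origin tube at (0, 0)
b = offset_mod_pitch(3.0 + 2 * PITCH, 7.5 - PITCH)  # origin two pitches over, one up
assert math.isclose(a[0], b[0]) and math.isclose(a[1], b[1])
```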
The first step in obtaining these values is back-projecting the image through a perspective correction. This undistorts the image, removing artifacts caused by the lens and focal length. This projection functionality is available in the OpenCV computer vision library.
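OpenCV handles this for whole images (e.g. cv2.getPerspectiveTransform plus cv2.warpPerspective); as a library-free sketch of the underlying math, here is a 3x3 homography applied to a single point. The matrices used are placeholders, not a real calibration:

```python
# A perspective (back-)projection maps image points through a 3x3
# homography matrix, dividing out the projective scale factor.
def apply_homography(H, x, y):
    """Map an image point (x, y) through a 3x3 homography H (nested lists)."""
    xs = H[0][0] * x + H[0][1] * y + H[0][2]
    ys = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return xs / w, ys / w  # divide out the projective scale

# Sanity check: the identity homography leaves points unchanged.
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
assert apply_homography(I, 12.0, 34.0) == (12.0, 34.0)
```

In practice one would estimate H from known tube positions (OpenCV's cv2.findHomography) rather than writing it by hand.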
The second step would be detecting a subset of visible tubes whose locations relative to the camera we know with high confidence. We need at least 3 tubes to provide the XYR information; the more tubes we have, the more we can validate our solution. I would say a minimum of 4 is a good bet, so that we have at least 1 extra tube to verify the solution.
Next, 3 tubes will be chosen from the subset, and the coordinate system will be placed on one of them. By some computation, probably the law of cosines, the origin's basis vectors will be chosen. Once the tube-sheet coordinate system has been established in the image, the XYR information is readily available.
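One way the step above could be sketched: place the origin on one detected tube, point the X axis at a neighbouring tube, and read the camera's XYR off the resulting frame. The tube positions here are made-up detections in the (already perspective-corrected) camera frame, and this is only one possible derivation of the basis vectors, not the author's actual computation:

```python
import math

def tube_sheet_xyr(origin_tube, primary_tube, camera_xy=(0.0, 0.0)):
    """Return (x, y, rotation_deg) of camera_xy in the tube-sheet frame
    whose origin is origin_tube and whose X axis points toward primary_tube."""
    ox, oy = origin_tube
    px, py = primary_tube
    rot = math.atan2(py - oy, px - ox)  # sheet rotation in the camera frame
    dx, dy = camera_xy[0] - ox, camera_xy[1] - oy
    # Rotate the camera offset into the tube-sheet frame.
    x = dx * math.cos(rot) + dy * math.sin(rot)
    y = -dx * math.sin(rot) + dy * math.cos(rot)
    return x, y, math.degrees(rot)

# Origin tube at (5, 0), primary-axis neighbour at (15, 0): no rotation,
# and the camera sits 5 units along the sheet's negative X axis.
x, y, r = tube_sheet_xyr((5.0, 0.0), (15.0, 0.0))
assert (x, y, r) == (-5.0, 0.0, 0.0)
```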
Given the XYR location of the camera relative to the tube sheet, it would be possible to compute the deflection of the guide tube with high accuracy, because there is no slop between the camera and the guide tube, and the exact relative orientation between the two can be measured or calibrated.
However, there is mechanical slop between the camera and the pod. If the camera system is used to correct the pod or slide axis for gripper assistance, there will be issues on many levels.
For instance, if the mechanical sag is too large, we may not be able to accurately determine (through the modulo operation) where the camera is relative to the pod, since our encoder angle will not represent the actual position within some tolerance that has not yet been determined and will vary per tube sheet.
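That ambiguity can be made concrete: recovering the absolute position from a modulo-pitch measurement only works while the encoder error stays under half a pitch, after which the estimate snaps to the wrong tube. All numbers below are made up for illustration:

```python
PITCH = 25.4  # hypothetical tube pitch in mm

def resolve_absolute(encoder_x, measured_mod, pitch=PITCH):
    """Snap the encoder estimate to the nearest absolute position that is
    consistent with the camera's modulo-pitch measurement."""
    base = round((encoder_x - measured_mod) / pitch) * pitch
    return base + measured_mod

true_x = 4 * PITCH + 3.0  # actual camera position
mod = true_x % PITCH      # what the vision system reports (about 3.0)

# Encoder error under half a pitch: resolved to the correct tube.
assert abs(resolve_absolute(true_x + 10.0, mod) - true_x) < 1e-9
# Encoder error over half a pitch: snapped one tube over (off by one pitch).
assert abs(resolve_absolute(true_x + 14.0, mod) - (true_x + PITCH)) < 1e-9
```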
Also, requesting frame captures, running image processing, and re-sending axis commands to the robot doubles, if not triples, the round-trip time for robot movement commands.
I believe the encoder system and the electronic feedback/control systems need to be revisited for accurately measuring and controlling the axis deflection using traditional methods before considering this machine vision solution (with regard to gripper assistance).
I do believe, though, that this optical system will be a good solution for guide-tube accuracy. Since it would only need to be used once per tube, and given the physical properties of the machine, it will be most effective there.
Thanks,
MK
sounds awesome...
...but what does it DO?
I'm aware this makes me look rather silly
It's a machine vision system to improve the accuracy of our robot which moves around on a particular tube pattern. It has legs to walk in the tubes and it has manipulator arms to position over other tubes. It will use the camera to improve the accuracy of the manipulator arms.
We've already got a machine vision system, but it costs a lot of money for the customer to buy (3rd party) and is a gigantic box that has to be lugged around. We want to integrate the accuracy-improvement functionality into our control software.
The 3rd-party solution, with its standalone box, does provide high-speed tracking: the robot can move very rapidly across the tube sheet, and the existing MV system is capable of tracking its absolute position. Our implementation would drop this feature; it would be an offline processing task, run only after the robot has approximately reached its target. The system would then be used to fine-tune the alignment. But this would eliminate the need for a supercomputer in a rad-box.
This topic is closed to new replies.