IMAQ Vision for LabWindows™/CVI™ User Manual
August 2004 Edition
Part Number 371266A-01
Worldwide Technical Support and Product Information: ni.com
Important Information Warranty The media on which you receive National Instruments software are warranted not to fail to execute programming instructions, due to defects in materials and workmanship, for a period of 90 days from date of shipment, as evidenced by receipts or other documentation. National Instruments will, at its option, repair or replace software media that do not execute programming instructions if National Instruments receives notice of such defects during the warranty period.
Contents

About This Manual
    Conventions
    Related Documentation
        IMAQ Vision
        NI Vision Assistant

Chapter 3. Making Grayscale and Color Measurements
    Define Regions of Interest
        Defining Regions Interactively
            Tools Palette Transformation
        Defining Regions Programmatically

Chapter 5. Performing Machine Vision Tasks
    Defining a Search Area
    Setting Matching Parameters and Tolerances
    Testing the Search Algorithm on Test Images
    Using a Ranking Method to Verify Results
    Finding Points Using Color Pattern Matching
        Defining and Creating Good Color Template Images

Appendix A. Technical Support and Professional Services
Glossary
Index
About This Manual

The IMAQ Vision for LabWindows/CVI User Manual is intended for engineers and scientists who have knowledge of the LabWindows™/CVI™ programming environment and need to create machine vision and image processing applications using C functions. The manual guides you through tasks, from setting up your imaging system to taking measurements.
Related Documentation
In addition to this manual, the following documentation resources are available to help you create your vision application.

IMAQ Vision
• IMAQ Vision Concepts Manual—If you are new to machine vision and imaging, read this manual to understand the concepts behind IMAQ Vision.
• IMAQ Vision for LabWindows/CVI Function Reference—If you need information about IMAQ Vision functions while creating your application, refer to this help file.
Other Documentation
• Your National Instruments image acquisition (IMAQ) device user manual—If you need installation instructions and device-specific information, refer to your device user manual.
Chapter 1. Introduction to IMAQ Vision

This chapter describes the IMAQ Vision for LabWindows/CVI software, outlines the IMAQ Vision function organization, and lists the steps for making a machine vision application. Refer to the Vision Development Module Release Notes that came with your software for information about the system requirements and installation procedure for IMAQ Vision for LabWindows/CVI.
IMAQ Vision Function Tree
The IMAQ Vision function tree (NIVision.lfp) contains separate classes corresponding to groups or types of functions. Table 1-1 lists the IMAQ Vision function types and gives a description of each type.

Table 1-1. IMAQ Vision Function Types
• Image Management—Functions that create space in memory for images and perform basic image manipulation.
• Caliper—Functions designed for gauging, measurement, and inspection applications.
• Operators—Functions that perform arithmetic, logic, and comparison operations with two images or with an image and a constant value.
• Analytic Geometry—Functions that perform basic geometric calculations on an image.
Table 1-2. IMAQ Machine Vision Function Types
• Measure Distances—Functions that measure distances between objects in an image.
• Measure Intensities—Functions that measure light intensities in various shaped regions within an image.
• Select Region of Interest—Functions that allow a user to select a specific region of interest in an image.
[Figure 1-1, flowchart: Set Up Your Imaging System; Calibrate Your Imaging System (Chapter 6); Create an Image; Acquire or Read an Image (Chapter 2); Display an Image; Attach Calibration Information; Analyze an Image; Improve an Image; then Make Measurements or Identify Objects in an Image using (1) Grayscale or Color Measurements, (2) Particle Analysis, and/or (3) Machine Vision.]
[Figure 1-2, flowchart (continued): Define Regions of Interest, then Measure Grayscale Statistics and Measure Color Statistics (grayscale and color measurements); Create a Binary Image, Improve a Binary Image, and Make Particle Measurements (particle analysis); Locate Objects to Inspect, Set Search Areas, Find Measurement Points, Identify Parts Under Inspection (Classify Objects, Read Characters, Read Symbologies), and Convert Pixel Coordinates to Real-World Coordinates (machine vision).]
Chapter 2. Getting Measurement-Ready Images

This chapter describes how to set up your imaging system, acquire and display an image, analyze the image, and prepare the image for additional processing.

Set Up Your Imaging System
Before you acquire, analyze, and process images, you must set up your imaging system. How you set up your system depends on your imaging environment and the type of analysis and processing you need to do.
3. Select an IMAQ device that meets your needs. National Instruments offers several IMAQ devices, including analog color and monochrome devices as well as digital devices. Visit ni.com/imaq for more information about IMAQ devices.
4. Configure the driver software for your image acquisition device. If you have a National Instruments image acquisition device, configure the NI-IMAQ driver software through MAX.
Table 2-1.
Source and Destination Images
Some IMAQ Vision functions that modify the contents of an image have source image and destination image input parameters. The source image receives the image to process. The destination image receives the processing results. The destination image can receive either another image or the original, depending on your goals. If you do not want the contents of the original image to change, use separate source and destination images.
• imaqAdd(myImageA, myImageA, myImageB);
  This function adds two source images and stores the result in the first source image.
• imaqAdd(myImageB, myImageA, myImageB);
  This function adds two source images and stores the result in the second source image.
Most operations between two images require that the images have the same type and size. However, some arithmetic operations can work between two different types of images, such as 8-bit and 16-bit images.
Acquiring an Image
Use one of the following methods to acquire images with a National Instruments IMAQ device.
• Acquire a single image using imaqEasyAcquire(). When you call this function, it initializes the IMAQ device and acquires the next incoming video frame. Use this function for low-speed single capture applications where ease of programming is essential.
• Acquire a single image using imaqSnap().
Use imaqReadVisionFile() to open an image file containing additional information, such as calibration information, template information for pattern matching, or overlay information. For more information about pattern matching templates and overlays, refer to Chapter 5, Performing Machine Vision Tasks.
Attach Calibration Information
If you want to attach the calibration information of the current setup to each image you acquire, use imaqCopyCalibrationInfo(). This function takes in a source image containing the calibration information and a destination image that you want to calibrate. The output image is your inspection image with the calibration information attached to it. For detailed information about calibration, refer to Chapter 6, Calibrating Images.
If the image quality meets your needs, use the histogram to determine the range of pixel values that correspond to objects in the image. You can use this range in processing functions, such as determining a threshold range during particle analysis. If the image quality does not meet your needs, try to improve the imaging conditions to get the necessary image quality.
Lookup Tables
Apply lookup table (LUT) transformations to highlight image details in areas containing significant information at the expense of other areas. A LUT transformation converts input grayscale values in the source image into other grayscale values in the transformed image. IMAQ Vision provides four functions that directly or indirectly apply lookup tables to images.
Convolution Filter
The imaqConvolve() function allows you to use a predefined set of lowpass and highpass filters. Each filter is defined by a kernel of coefficients. Use imaqGetKernel() to retrieve predefined kernels. If the predefined kernels do not meet your needs, define your own custom filter using a 2D array of floating point numbers.
• Closing—Removes dark pixels isolated in bright regions and smooths boundaries.
• Proper-opening—Removes bright pixels isolated in dark regions and smooths the inner contours of particles.
• Proper-closing—Removes dark pixels isolated in bright regions and smooths the inner contours of particles.
• Auto-median—Generates simpler particles that have fewer details.

FFT
Use the Fast Fourier Transform (FFT) to convert an image into its frequency domain.
attenuation increases. This operation preserves all of the zero frequency information. Zero frequency information corresponds to the DC component of the image, or the average intensity of the image in the spatial domain.
• Highpass attenuation—The amount of attenuation is inversely proportional to the frequency. At high frequencies, there is little attenuation. As the frequencies decrease, the attenuation increases.
Chapter 3. Making Grayscale and Color Measurements

This chapter describes how to take measurements from grayscale and color images. You can make inspection decisions based on image statistics, such as the mean intensity level in a region. Based on the image statistics, you can perform many machine vision inspection tasks on grayscale or color images, such as detecting the presence or absence of components, detecting flaws in parts, and comparing a color component with a reference.
Table 3-1 describes each of the tools and the manner in which you use them.

Table 3-1. Tools Palette Functions
• Selection Tool—Select an ROI in the image and adjust the position of its control points and contours. Action: Click the desired ROI or control points.
• Point—Select a pixel in the image. Action: Click the desired position.
• Line—Draw a line in the image.
• Freehand Line—Draw a freehand line in the image. Action: Click the initial position, drag to the desired shape, and release the mouse button to complete the shape.
• Freehand Region—Draw a freehand region in the image. Action: Click the initial position, drag to the desired shape, and release the mouse button to complete the shape.
• Zoom—Zoom in or zoom out in an image.
You can display the IMAQ Vision tools palette as part of an ROI constructor window or in a separate, floating window. Follow these steps to invoke an ROI constructor and define an ROI from within the ROI constructor window:
1. Use imaqConstructROI2() to display an image and the tools palette in an ROI constructor window, as shown in Figure 3-2.

Figure 3-2. ROI Constructor

2. Select an ROI tool from the tools palette.
3. Draw an ROI on your image.
3. Click OK to populate a structure representing the ROI. You can use this structure as an input to a variety of functions, such as the functions that measure grayscale intensity.
The following list describes how you can display the tools palette in a separate window and manipulate the palette.
• Use imaqShowToolWindow() to display the tools palette in a floating window.
• Use imaqSetupToolWindow() to configure the appearance of the tools palette.
• Use imaqMoveToolWindow() to move the tools palette.
• Use imaqCloseToolWindow() to close the tools palette.
Pass the binary image, or a labeled version of the binary image, as a mask image to the intensity measurement function. If you want to make color comparisons, convert the binary image into an ROI descriptor using imaqMaskToROI().

Measure Grayscale Statistics
You can measure grayscale statistics in images using light meters or quantitative analysis functions. You can obtain the center of energy for an image with the centroid function.
[Figure 3-4, diagram: a color image can be represented as Red, Green, and Blue planes; as Hue, Saturation, and Luminance planes; as Hue, Saturation, and Value planes; or as Hue, Saturation, and Intensity planes, with each plane extracted as an 8-bit image for processing.]
Comparing Colors
You can use the color matching capability of IMAQ Vision to compare or evaluate the color content of an image or regions in an image. Complete the following steps to compare colors using color matching:
1. Select an image containing the color information that you want to use as a reference. The color information can consist of a single color or multiple dissimilar colors, such as red and blue.
2.
Figure 3-6. Template Color Information

The following sections explain when to learn the color information associated with an entire image, a region in an image, or multiple regions in an image.

Using the Entire Image
You can use an entire image to learn the color spectrum that represents the entire color distribution of the image.
Figure 3-8. Using a Single Region to Learn Color Distribution

Using Multiple Regions in the Image
The interaction of light with the object surface creates the observed color of that object. The color of a surface depends on the directions of illumination and the direction from which the surface is observed. Two identical objects may have different appearances because of a difference in positioning or a change in the lighting conditions.
Figure 3-9. Using Multiple Regions to Learn Color Distribution

Choosing a Color Representation Sensitivity
When you learn a color, you need to specify the sensitivity required to represent the color information.
Ignoring Learned Colors
Ignore certain color components in color matching by setting the corresponding component in the input color spectrum array to –1. For example, setting the last component in the color spectrum to –1 causes the white color to be ignored during the color matching process. Setting the second-to-last component in the color spectrum to –1 causes the black color to be ignored during the color matching process.
Chapter 4. Performing Particle Analysis

This chapter describes how to perform particle analysis on your images. Use particle analysis to find statistical information about particles—such as the area, location, and presence of particles. With this information, you can perform many machine vision inspection tasks, such as detecting flaws on silicon wafers or detecting soldering defects on electronic boards.
If all the objects in your grayscale image are either brighter or darker than your background, you can use imaqAutoThreshold() to automatically determine the optimal threshold range and threshold your image. Automatic thresholding techniques offer more flexibility than simple thresholds based on fixed ranges.
Removing Unwanted Particles
Use imaqRejectBorder() to remove particles that touch the border of the image. Reject particles on the border of the image when you suspect that the information about those particles is incomplete. Use imaqSizeFilter() to remove large or small particles that do not interest you. You can also use the IMAQ_ERODE, IMAQ_OPEN, and IMAQ_POPEN methods in imaqMorphology() to remove small particles.
The open function removes isthmuses, while close widens the isthmuses. Close and proper-close fill small holes in the particle. Auto-median removes isthmuses and fills holes. Refer to Chapter 9, Binary Morphology, of the IMAQ Vision Concepts Manual for more information about these methods.

Make Particle Measurements
After you create a binary image and improve it, you can make particle measurements. IMAQ Vision can return the measurements in uncalibrated pixels or calibrated real-world units.
Table 4-1.
Chapter 5. Performing Machine Vision Tasks

This chapter describes how to perform many common machine vision inspection tasks. The most common inspection tasks are detecting the presence or absence of parts in an image and measuring the dimensions of parts to see if they meet specifications. Measurements are based on characteristic features of the object represented in the image.
Figure 5-1 illustrates the basic steps involved in performing machine vision inspection tasks.

[Figure 5-1, flowchart: Locate Objects to Inspect; Set Search Areas; Find Measurement Points; Identify Parts Under Inspection (Classify Objects, Read Characters, Read Symbologies); Convert Pixel Coordinates to Real-World Coordinates; Make Measurements; Display Results]

Note: Diagram items enclosed with dashed lines are optional steps.
If the object under inspection moves, it appears shifted and rotated in the image you need to process. This coordinate system is referred to as the measurement coordinate system. The measurement methods automatically move the ROIs to the correct position using the position of the measurement coordinate system with respect to the reference coordinate system. Refer to Chapter 13, Dimensional Measurements, of the IMAQ Vision Concepts Manual for information about coordinate systems.
Using Edge Detection to Build a Coordinate Transform
You can build a coordinate transform using two edge detection techniques. Use imaqFindTransformRect() to define a coordinate system using one rectangular region. Use imaqFindTransformRects() to define a coordinate system using two independent rectangular regions. Follow these steps to build a coordinate transform using edge detection.
b. If you use imaqFindTransformRects(), specify two rectangular regions, each containing one separate, straight boundary of the object, as shown in Figure 5-3. The boundaries cannot be parallel. The regions must be large enough to include the boundaries in all the images you want to inspect.

Figure 5-3. [legend: 1. Primary Search Area; 2. Secondary Search Area; 3. Origin of the Coordinate System; 4. Measurement Area]
Using Pattern Matching to Build a Coordinate Transform
You can build a coordinate transform using pattern matching. Use imaqFindTransformPattern() to define a coordinate system based on the location of a reference feature. Use this technique when the object under inspection does not have straight, distinct edges. Complete the following steps to build a coordinate reference system using pattern matching.
Choosing a Method to Build the Coordinate Transform
Figure 5-4 guides you through choosing the best method for building a coordinate transform for your application.

[Figure 5-4, decision flowchart: based on the required object positioning accuracy, whether the object under inspection has a straight, distinct edge (main axis), and whether it contains a second distinct edge not parallel to the main axis in the same search area, choose between the edge-detection and pattern-matching methods.]
Set Search Areas
You use ROIs to define search areas in your images and limit the areas in which you perform your processing and inspection. You can define ROIs interactively or programmatically.

Defining Regions Interactively
Complete the following steps to interactively define an ROI:
1. Use imaqConstructROI2() to display an image and the tools palette in a window.
2. Select an ROI tool from the tools palette.
3. Draw an ROI on your image.
Defining Regions Programmatically
When you have an automated application, you need to define ROIs programmatically. You can programmatically define regions in two ways:
• Specify the contours of the ROI.
• Specify individual structures by providing basic parameters that describe the region you want to define. You can specify a rotated rectangle by providing the coordinates of the center, the width, the height, and the rotation angle.
Finding Lines or Circles
If you want to find points along the edge of an object and find a line describing the edge, use imaqFindEdge() and imaqFindConcentricEdge(). The imaqFindEdge() function finds edges based on rectangular search areas, as shown in Figure 5-5. The imaqFindConcentricEdge() function finds edges based on annular search areas.

Figure 5-5. [legend: 1. Search Region; 2. Search Lines; 3. Detected Edge Points; 4. Line Fit to Edge Points]
If you want to find points along a circular edge and find the circle that best fits the edge, as shown in Figure 5-6, use imaqFindCircularEdge().

Figure 5-6. Finding a Circular Feature [legend: 1. Annular Search Region; 2. Search Lines; 3. Detected Edge Points; 4. Circle Fit to Edge Points]

Use imaqFindEdge() and imaqFindConcentricEdge() to locate the intersection points between a set of search lines within the search region and the edge of an object.
These functions require you to input the coordinates of the points along the search contour. Use imaqROIProfile() to obtain the coordinates along the edge of each contour in an ROI. If you have a straight line, use imaqGetPointsOnLine() to obtain the points along the line instead of using an ROI. These functions determine the edge points based on their contrast and slope. You can specify whether you want to find the edge points using subpixel accuracy.
Finding Points Using Pattern Matching
The pattern matching algorithms in IMAQ Vision measure the similarity between an idealized representation of a feature, called a template, and the feature that may be present in an image. A feature is defined as a specific pattern of pixels in an image. Pattern matching returns the location of the center of the template and the template orientation.
Symmetry
A rotationally symmetric template, shown in Figure 5-7a, is less sensitive to changes in rotation than one that is rotationally asymmetric, shown in Figure 5-7b. A rotationally symmetric template provides good positioning information but no orientation information.

Figure 5-7. Symmetry
Positional Information
A template with strong edges in both the x and y directions is easier to locate. Figure 5-9a shows good positional information in both the x and y directions, while Figure 5-9b shows insufficient positional information in the y direction.

Figure 5-9. Positional Information

Background Information
Unique background information in a template improves search performance and accuracy.
the template that are necessary for shift-invariant matching. However, if you want to match the template at any orientation, use rotation-invariant matching. Use the learningMode parameter of imaqLearnPattern2() to specify which type of learning mode to use. The learning process is usually time intensive because the algorithm attempts to find the optimum features of the template for the particular matching process.
Figure 5-11. Selecting a Search Area for Grayscale Pattern Matching

Setting Matching Parameters and Tolerances
Every pattern matching algorithm makes assumptions about the images and pattern matching parameters used in machine vision applications. These assumptions work for a high percentage of the applications. However, there may be applications in which the assumptions used in the algorithm are not optimal.
Minimum Contrast
The pattern matching algorithm ignores all image regions in which contrast values fall below a set minimum contrast value. Contrast is the difference between the smallest and largest pixel values in a region. Set the minContrast element of the imaqMatchPattern2() options parameter to slightly below the contrast value of the search area with the lowest contrast.
Using a Ranking Method to Verify Results
The manner in which you interpret the pattern matching results depends on your application. For typical alignment applications, such as finding a fiducial on a wafer, the most important information is the position and location of the best match. Use the position and corner elements of the PatternMatch structure to get the position and the bounding rectangle of a match.
5. Set the tolerances and parameters to specify how the algorithm operates at run time using the options parameter of imaqMatchColorPattern().
6. Test the search algorithm on test images using imaqMatchColorPattern().
7. Verify the results using a ranking method.

Defining and Creating Good Color Template Images
The selection of a good template image plays a critical part in obtaining accurate results with the color pattern matching algorithm.
Background Information
Unique background information in a template improves search performance and accuracy during the grayscale pattern matching phase. This requirement could conflict with the color information requirement because background colors may not be desirable during the color location phase.
Defining a Search Area
Two equally important factors define the success of a color pattern matching algorithm—accuracy and speed. You can define a search area to reduce ambiguity in the search process. For example, if your image has multiple instances of a pattern and only one instance is required for the inspection task, the presence of additional instances of the pattern can produce incorrect results.
The time required to locate a pattern in an image depends on both the template size and the search area. By reducing the search area or increasing the template size, you can reduce the required search time. Increasing the template size can improve the search time, but doing so reduces match accuracy if the larger template includes an excess of background information.
Choose from the following search strategies:
• IMAQ_CONSERVATIVE—Uses a very small step size, the least amount of subsampling, and all the color information present in the template. The conservative strategy is the most reliable method to look for a template in any image at potentially reduced speed.

Note: Use the IMAQ_CONSERVATIVE strategy if you have multiple targets located very close to each other in the image.
Rotation Angle Ranges
Refer to the Setting Matching Parameters and Tolerances section of this chapter for information about rotation angle ranges.

Testing the Search Algorithm on Test Images
To determine if your selected template or reference pattern is appropriate for your machine vision application, test the template on a few test images by using imaqMatchColorPattern().
6. Test the color location algorithm on test images using imaqMatchColorPattern().
7. Verify the results using a ranking method.

You can save the template image using imaqWriteVisionFile().

Convert Pixel Coordinates to Real-World Coordinates
The measurement points you located with edge detection and pattern matching are in pixel coordinates.
Analytic Geometry Measurements
Use the following functions to make geometrical measurements from the points you detect in the image:
• imaqFitLine()—Fits a line to a set of points and computes the equation of the line.
• imaqFitCircle2()—Fits a circle to a set of at least three points and computes its area, perimeter, and radius.
Use imaqFindLCDSegments() to calculate the ROI around each digit in an LCD or LED. To find the area of each digit, all the segments of the indicator must be activated. Use imaqReadLCD() to read multiple digits of an LCD or LED.

Identify Parts Under Inspection
In addition to making measurements after you set regions of inspection, you can also identify parts using classification, optical character recognition (OCR), and barcode reading.
The following code sample provides an example of a typical classification application.

    ClassifierSession* session;
    Image* image;
    ROI* roi;
    char* fileName; // The classifier file to use.
    ClassifierReport* report;

    session = imaqReadClassifierFile(NULL, fileName,
        IMAQ_CLASSIFIER_READ_ALL, NULL, NULL, NULL);
    while (stillClassifying) {
        // Acquire and process an image and store it in the
        // image variable.
Reading Barcodes
Use barcode reading functions to read values encoded into 1D barcodes, Data Matrix barcodes, and PDF417 barcodes.

Reading 1D Barcodes
To read a 1D barcode, locate the barcode in the image using one of the techniques described in this chapter. Then pass the ROI Descriptor of the location into imaqReadBarcode(). Use imaqReadBarcode() to read values encoded in the 1D barcode.
By default, imaqReadDataMatrixBarcode() assumes the barcode cells are square. If the barcodes you need to read have round cells, set the cellShape element of the options parameter to IMAQ_ROUND_CELLS. Specify round cells only if the Data Matrix cells are round and have clearly defined edges. If the cells in the matrix touch one another, you must set cellShape to IMAQ_SQUARE_CELLS.
Use the following functions to overlay search regions, inspection results, and other information, such as text and bitmaps:
• imaqOverlayPoints()—Overlays points on an image. Specify a point by its x-coordinate and y-coordinate.
• imaqOverlayLine()—Overlays a line on an image. Specify a line by its start and end points.
• imaqOverlayRect()—Overlays a rectangle on an image.
• imaqOverlayOval()—Overlays an oval or a circle on the image.
With all of these functions except imaqFindPattern(), imaqCountObjects(), and imaqFindTransformPattern(), you can overlay the following kinds of information:
• The search area input into the function
• The search lines used for edge detection
• The edges detected along the search lines
• The result of the function
With imaqFindPattern(), imaqCountObjects(), and imaqFindTransformPattern(), you can overlay the search area and the result.
Chapter 6 Calibrating Images

This chapter describes how to calibrate your imaging system, save calibration information, and attach calibration information to an image.

After you set up your imaging system, you may want to calibrate your system. If your imaging setup is such that the camera axis is perpendicular or nearly perpendicular to the object under inspection and your lens has no distortion, use simple calibration. With simple calibration, you do not need to learn a template.
Refer to Chapter 5, Performing Machine Vision Tasks, for more information about applying calibration information before making measurements.

Defining a Calibration Template

You can define a calibration template by supplying an image of a grid or by providing a list of pixel coordinates and their corresponding real-world coordinates. This section discusses the grid method in detail.

A calibration template is a user-defined grid of circular dots.
Defining a Reference Coordinate System

To express measurements in real-world units, you need to define a coordinate system in the image of the grid. Use the CoordinateSystem structure to define a coordinate system by its origin, angle, and axis direction. The origin, expressed in pixels, defines the center of your coordinate system. The angle specifies the orientation of your coordinate system with respect to the angle of the topmost row of dots in the grid image.
[Figure 6-3. A Calibration Grid and an Image of the Grid. Legend: (1) origin of a calibration grid in the real world; (2) origin of the same calibration grid in an image.]

Note: If you specify a list of points instead of a grid for the calibration process, the software defines a default coordinate system, as follows:
1. The origin is placed at the point in the list with the lowest x-coordinate value and then the lowest y-coordinate value.
2.
[Figure 6-4. Defining a Coordinate System. Legend: (1) default origin in a calibration grid image; (2) user-defined origin.]

Learning Calibration Information

After you define a calibration grid and reference axis, acquire an image of the grid using the current imaging setup. For information about acquiring images, refer to the Acquire or Read an Image section of Chapter 2, Getting Measurement-Ready Images. The grid does not need to occupy the entire image.
Specifying Scaling Factors

Scaling factors are the real-world distances between the dots in the calibration grid in the x and y directions and the units in which the distances are measured. Use the GridDescriptor structure to specify the scaling factors.

Choosing a Region of Interest

Specify a learning ROI during the learning process to select the region of the calibration grid you want to learn.
Choose the perspective projection algorithm when your system exhibits perspective errors only. A perspective projection calibration has an accurate transformation even in areas not covered by the calibration grid, as shown in Figure 6-6. Set the mode element of the options parameter to IMAQ_PERSPECTIVE to choose the perspective calibration algorithm. Learning and applying perspective projection is less computationally intensive than the nonlinear method.
If the learning process returns a learning score below 600, try the following:
1. Make sure your grid complies with the guidelines listed in the Defining a Calibration Template section of this chapter.
2. Check the lighting conditions. If you have too much or too little lighting, the software may estimate the center of the dots incorrectly. Also, adjust the range parameter to distinguish the dots from the background.
3. Select another learning algorithm.
Simple Calibration

When the axis of your camera is perpendicular to the image plane and lens distortion is negligible, use simple calibration. In simple calibration, a pixel coordinate is transformed to a real-world coordinate through scaling in the horizontal and vertical directions. Use simple calibration to map pixel coordinates to real-world coordinates directly, without a calibration grid.
Save Calibration Information

After you learn the calibration information, you can save it so that you do not have to relearn the information for subsequent processing. Use imaqWriteVisionFile() to save the image of the grid and its associated calibration information to a file. To read the file containing the calibration information, use imaqReadVisionFile().
Appendix A Technical Support and Professional Services

Visit the following sections of the National Instruments Web site at ni.com for technical support and professional services:
• Support—Online technical support resources at ni.com.
Glossary

Numbers
1D: One-dimensional.
2D: Two-dimensional.
3D: Three-dimensional.

A
AIPD: The National Instruments internal image file format used for saving complex images and calibration information associated with an image (extension APD).
alignment: The process by which a machine vision application determines the location, orientation, and scale of a part being inspected.
alpha channel: The channel used to code extra information, such as gamma correction, about a color image.
barycenter: The grayscale value representing the centroid of the range of an image's grayscale values in the image histogram.
binary image: An image in which the objects usually have a pixel intensity of 1 (or 255) and the background has a pixel intensity of 0.
binary morphology: Functions that perform morphological operations on a binary image.
C
caliper: (1) A function in the NI Vision Assistant and in NI Vision Builder for Automated Inspection that calculates distances, angles, circular fits, and the center of mass based on positions given by edge detection, particle analysis, centroid, and search functions. (2) A measurement function that finds edge pairs along a specified path in the image.
connectivity-4: Only pixels adjacent in the horizontal and vertical directions are considered neighbors.
connectivity-8: All adjacent pixels are considered neighbors.
contrast: A constant multiplication factor applied to the luma and chroma components of a color pixel in the color decoding process.
convex hull: The smallest convex polygon that can encapsulate a particle.
convex hull function: Computes the convex hull of objects in a binary image.
convolution: See linear filter.
edge steepness: The number of pixels that corresponds to the slope or transition area of an edge.
energy center: The center of mass of a grayscale image. See also center of mass.
equalize function: See histogram equalization.
erosion: Reduces the size of an object along its boundary and eliminates isolated points in the image.
exponential and gamma corrections: Expand the high gray-level information in an image while suppressing low gray-level information.
gradient filter: An edge detection algorithm that extracts the contours in gray-level values. Gradient filters include the Prewitt and Sobel filters.
gray level: The brightness of a pixel in an image.
gray-level dilation: Increases the brightness of pixels in an image that are surrounded by other pixels with a higher intensity.
gray-level erosion: Reduces the brightness of pixels in an image that are surrounded by other pixels with a lower intensity.
hit-miss function: Locates objects in the image similar to the pattern defined in the structuring element.
HSI: A color encoding scheme in hue, saturation, and intensity.
HSL: A color encoding scheme using hue, saturation, and luminance information, where each pixel in the image is encoded using 32 bits: 8 bits for hue, 8 bits for saturation, 8 bits for luminance, and 8 unused bits.
HSV: A color encoding scheme in hue, saturation, and value.
hue: Represents the dominant color of a pixel.
image enhancement: The process of improving the quality of an image that you acquire from a sensor in terms of signal-to-noise ratio, image contrast, edge definition, and so on.
image file: A file containing pixel data and additional information about the image.
image format: Defines how an image is stored in a file. Usually composed of a header followed by the pixel data.
image mask: A binary image that isolates parts of a source image for further processing.
intensity calibration: Assigns user-defined quantities, such as optical densities or concentrations, to the gray-level values in an image.
intensity profile: The gray-level distribution of the pixels along an ROI in an image.
intensity range: Defines the range of gray-level values in an object of an image.
intensity threshold: Characterizes an object based on the range of gray-level values in the object.
line gauge: Measures the distance between selected edges with high-precision subpixel accuracy along a line in an image. For example, this function can be used to measure distances between points and edges. This function also can step and repeat its measurements across the image.
line profile: Represents the gray-level distribution along a line of pixels in an image.
luminance: See luma.
LUT: Lookup table. A table containing values used to transform the gray-level values of an image. For each gray-level value in the image, the corresponding new value is obtained from the lookup table.

M
M: (1) Mega, the standard metric prefix for 1 million, or 10^6, when used with units of measure such as volts and hertz. (2) Mega, the prefix for 1,048,576, or 2^20, when used with B to quantify data or computer memory.
N
neighbor: A pixel whose value affects the value of a nearby pixel when an image is processed. The neighbors of a pixel are usually defined by a kernel or a structuring element.
neighborhood operations: Operations on a point in an image that take into consideration the values of the pixels neighboring that point.
NI-IMAQ: The driver software for National Instruments IMAQ hardware.
nonlinear filter: Replaces each pixel value with a nonlinear function of its surrounding pixels.
offset: The coordinate position in an image where you want to place the origin of another image. Setting an offset is useful when performing mask operations.
opening: An erosion followed by a dilation. An opening removes small objects and smooths boundaries of objects in the image.
operators: Allow masking, combination, and comparison of images. You can use arithmetic and logic operators in IMAQ Vision.
PNG: Portable Network Graphics. An image file format for storing 8-bit, 16-bit, and color images with lossless compression. PNG images have the file extension PNG.
Prewitt filter: An edge detection algorithm that extracts the contours in gray-level values using a 3 × 3 filter kernel.
proper-closing: A finite combination of successive closing and opening operations that you can use to fill small holes and smooth the boundaries of objects.
ROI: Region of interest. (1) An area of the image that is graphically selected from a window displaying the image. This area can be used to focus further processing. (2) A hardware-programmable rectangular portion of the acquisition window.
ROI tools: A collection of tools that enable you to select a region of interest from an image. These tools let you select points, lines, annuli, polygons, rectangles, rotated rectangles, ovals, and freehand open and closed contours.
spatial filters: Alter the intensity of a pixel relative to variations in intensities of its neighboring pixels. You can use these filters for edge detection, image enhancement, noise reduction, smoothing, and so forth.
spatial resolution: The number of pixels in an image, in terms of the number of rows and columns in the image.
square function: See exponential function.
square root function: See logarithmic function.
V
value: The grayscale intensity of a color pixel, computed as the average of the maximum and minimum red, green, and blue values of that pixel.
VI: Virtual Instrument. (1) A combination of hardware and/or software elements, typically used with a PC, that has the functionality of a classic stand-alone instrument. (2) A LabVIEW software module (VI), which consists of a front panel user interface and a block diagram program.