In an embedded image-processing system, the camera is the image-generation device, and because the processing and analysis involve machine vision, debugging it is comparatively troublesome.
I. Introduction to machine vision
Machine vision uses machines in place of human eyes to sense the external environment and to make measurements and judgments. An imaging device (the image pickup device, either CMOS or CCD) converts the captured target into an image signal and transmits it to a dedicated image processing system, which converts it into digital signals according to pixel distribution, brightness, color, and so on. The image system then performs various operations on these signals to extract the characteristics of the target, and the on-site equipment is controlled according to the results of this discrimination. In systems with high real-time demands, human reaction speed and information-processing capability cannot meet the requirements. Machine vision makes information integration easy and, combined with a computer control system, can raise the degree of automation of the system.
II. Purpose of camera debugging
The purpose of camera debugging in an embedded system is to adjust the mechanical and electrical parameters of the camera so that it produces the highest-quality image data that meets the system requirements. In an imaging system involving both hardware and software, image quality is affected by many factors, from outside interference to the system's own limitations; these effects cause noise and imaging unevenness. Factors at the software level are usually algorithm problems, which can be solved through theoretical and mathematical analysis. Factors at the hardware level must be debugged with instruments and can only be resolved through experimental measurement and analysis. Because the hardware is the underlying processing system, its quality directly affects the software and thus the final image quality. Debugging the camera means eliminating interference at the hardware level as far as possible.
III. Camera debugging methods
Because "embedded system" is a broad concept, this article introduces the debugging methods using, as an example, a camera-guided smart car with an HCS12 as the main control chip.
(1) Externally connected CRT display. Draw the three leads from the analog camera (power, ground, and signal), supply power to the camera, and connect the video signal cable to the video input of a TV box. Connect the VGA-OUT of the TV box to a CRT display so that the CRT visually shows the camera image.
This is a purely hardware-level display method. It provides a display effect equivalent to human vision and is a significant help for camera installation and physical parameter adjustment.
(2) Off-chip extended LCD. The HCS12 series MCUs include a Serial Peripheral Interface (SPI) module, which supports data transmission between MCUs and is faster than serial asynchronous communication (SCI). The SPI module also supports bidirectional, synchronous, serial communication between the MCU and peripheral devices, enabling peripheral expansion of the MCU.
The Nokia 3310 LCD available on the market is cheap, it images with a binary dot matrix, and the display module is 48*84 dots. Relevant information is displayed by writing data to the corresponding dots so that they show as dark or light.
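As a rough sketch of how data reaches such a module, the routine below writes one byte over SPI. The SPISR/SPIDR names follow the HCS12 SPI register layout (SPTEF in bit 5, SPIF in bit 7) and are assumed to come from the derivative header; the LCD_DC and LCD_CE macros for the data/command and chip-enable pins are hypothetical and depend on how the LCD is wired to the port pins.

```c
#define LCD_CMD   0          /* D/C low:  byte is interpreted as a command  */
#define LCD_DATA  1          /* D/C high: byte is interpreted as pixel data */

/* Minimal sketch: send one byte to a Nokia 3310-style LCD over the HCS12 SPI. */
void lcd_write(unsigned char dc, unsigned char value)
{
    LCD_DC = dc;                  /* select command or data mode             */
    LCD_CE = 0;                   /* assert the LCD chip enable              */
    while (!(SPISR & 0x20)) ;     /* wait until transmit buffer empty (SPTEF)*/
    SPIDR = value;                /* shift the byte out on MOSI              */
    while (!(SPISR & 0x80)) ;     /* wait until transfer complete (SPIF)     */
    (void)SPIDR;                  /* dummy read clears the SPIF flag         */
    LCD_CE = 1;                   /* release the chip enable                 */
}
```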
1. Displaying characters. While the system runs, system-related operating parameters are shown as character prompts. Each character occupies an 8*6 block of dots and requires 6 bytes of data; to display a character, one only needs to write the corresponding data to the designated positions. Since the LCD module itself has no character library, the dot-matrix data for the ASCII characters is defined at the start of the program as a two-dimensional array of N*6 bytes, as sketched below.
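A minimal sketch of that arrangement: a placeholder 6-byte-per-character font table and a routine that streams the six column bytes of one character to the display, reusing the lcd_write() helper sketched above. The table contents and the starting offset at the space character are assumptions.

```c
/* Placeholder ASCII font: 6 column bytes per character, each byte encoding
   an 8-dot column; the table starts at the space character (0x20). */
static const unsigned char font6x8[][6] = {
    {0x00, 0x00, 0x00, 0x00, 0x00, 0x00},   /* ' ' */
    {0x00, 0x00, 0x5F, 0x00, 0x00, 0x00},   /* '!' */
    /* ... remaining printable ASCII characters ... */
};

/* Write one character at the current LCD address by streaming its 6 columns. */
void lcd_putc(char c)
{
    unsigned char col;
    for (col = 0; col < 6; col++)
        lcd_write(LCD_DATA, font6x8[c - ' '][col]);
}
```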
2. Displaying the picture. The video signal collected by the analog camera is digitized by the MCU's A/D converter and stored in a 40*70 two-dimensional array. After the array is binarized, it can be shown on the 48*84 LCD module, allowing developers to observe the camera's vision in real time.
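The sketch below illustrates one way this could be done: the 40*70 sampled array is thresholded to one bit per pixel and packed into the module's column-byte format, one 8-dot-high bank at a time. The threshold value, the packing details, and the lcd_goto() cursor helper are assumptions; the array size and the lcd_write() helper follow the description above.

```c
#define IMG_ROWS   40
#define IMG_COLS   70
#define THRESHOLD  0x80               /* assumed gray-level cut-off */

/* Binarize the sampled picture and stream it to the LCD.  The module is
   organised in 8-dot-high banks with one byte per column, so 40 image rows
   fill 5 of the 6 banks; the remaining screen area is left untouched. */
void lcd_show_picture(const unsigned char image[IMG_ROWS][IMG_COLS])
{
    unsigned char bank, x, bit, column;

    for (bank = 0; bank < IMG_ROWS / 8; bank++) {         /* 5 banks of 8 rows */
        /* position the cursor at column 0 of this bank; lcd_goto() is a
           hypothetical helper issuing the module's set-X/set-Y commands */
        lcd_goto(bank, 0);
        for (x = 0; x < IMG_COLS; x++) {
            column = 0;
            for (bit = 0; bit < 8; bit++) {
                if (image[bank * 8 + bit][x] > THRESHOLD)  /* binarize pixel */
                    column |= (unsigned char)(1 << bit);
            }
            lcd_write(LCD_DATA, column);                   /* one column byte */
        }
    }
}
```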
This method combines hardware and software: it tracks and displays camera-related information in real time, and the display does not interrupt the system's operation.
(3) Sending the picture to a PC over serial communication. The MCU's SCI module writes the data to the PC, and the PC-side program reads it using the MSCOMM control. After reading the data, the powerful data-processing and image-display capabilities of a Windows program can be used to work on the image data: redraw the image from the data, analyze it and try out filters on the array, or export the received array to a file, providing a data source for computer simulation.
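On the MCU side, sending the array could look roughly like the sketch below: each byte is pushed through the SCI data register once the transmitter is ready. The SCISR1/SCIDRL names follow the HCS12 SCI register layout (TDRE in bit 7) and are assumed to come from the derivative header; the frame-header byte is a made-up convention so the PC program can find the start of a picture.

```c
#define IMG_ROWS      40
#define IMG_COLS      70
#define FRAME_HEADER  0xAA           /* assumed sync byte marking a new frame */

/* Send one byte through the SCI once the transmit data register is empty. */
static void sci_put(unsigned char value)
{
    while (!(SCISR1 & 0x80)) ;       /* wait for TDRE                         */
    SCIDRL = value;                  /* load the byte into the data register  */
}

/* Stream the whole sampled picture to the PC, row by row. */
void sci_send_picture(const unsigned char image[IMG_ROWS][IMG_COLS])
{
    unsigned char row, col;

    sci_put(FRAME_HEADER);                      /* mark the start of a frame */
    for (row = 0; row < IMG_ROWS; row++)
        for (col = 0; col < IMG_COLS; col++)
            sci_put(image[row][col]);           /* one gray value per byte   */
}
```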
This is a purely software display method: once the data is received from the MCU, a whole series of processing steps can be carried out on the PC. For verifying image transformations, evaluating filters, and exploring simulation ideas with the data, it has advantages that the other methods cannot match.
IV. Comparison of the advantages and disadvantages of the three methods
1. CRT debugging method. By tapping the camera's video signal, the CRT can display the machine's vision with high fidelity. However, it is limited to camera parameter testing and mechanical position adjustment, and it can do nothing with the digitized signal.
2. LCD debugging method. The module connects directly to the microcontroller's SPI port for data transmission and refreshes the displayed picture in real time; it can be carried on the system itself and show system-related information as the system runs. However, limited by the module's resolution, it can only display black-and-white binary values, which distorts the digital picture.
3. Serial debugging method. It makes full use of the PC's powerful data-processing and picture-display capability: it can display digital pictures with pixel-level precision, export the gray-value table, and provide field data for VC and MATLAB simulation. However, data transmission between the PC and the MCU is too slow, so it lacks real-time performance and the ability to track dynamically.
V. Conclusion
Each of the three camera debugging methods has its advantages and disadvantages, but combined they complement one another at different stages of system design. Machine vision is a relatively new area within today's industrial automation; as it develops, many more excellent debugging methods will certainly emerge. This article, drawing on the author's experience, proposes three low-cost, highly feasible methods, in the hope that they will be of help to embedded developers.