Frequently Asked Questions

Will ATCOM work with my data?

Speckle was designed to correct atmospherically degraded data and has a track record of success in this area. Speckle is not de-hazing, stabilization, contrast enhancement, de-noising, or super resolution; however, we have observed those desirable traits as side effects of running speckle. Using speckle solely for its “side effects” is feasible, but expensive in terms of hardware and power consumption. The best way to verify whether a dataset can be enhanced by ATCOM is to contact EM Photonics and ask for a demonstration. We will gladly process your sample data and return the results.

What are the minimum optical and camera requirements?

ATCOM does not have minimum camera or telescope requirements; however, we have observed the best results with high-framerate (~200 FPS), high bit-depth (12-bit) data. The more raw information available to the algorithm, the better it performs. These recommendations can be relaxed depending on the amount of movement in the scenes of interest.
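
As a rough illustration of the data rate such a recommendation implies, consider the sketch below; the 1024 × 1024 sensor size is an assumed example, not a requirement:

    # Back-of-envelope raw data rate for a candidate camera.
    # The 1024 x 1024 sensor size is an assumed example, not a requirement.
    width, height = 1024, 1024      # pixels
    frame_rate = 200                # frames per second (recommended ballpark)
    bits_per_pixel = 12             # recommended bit depth

    bits_per_second = width * height * frame_rate * bits_per_pixel
    print(f"Raw data rate: {bits_per_second / 8 / 1e6:.0f} MB/s")  # ~315 MB/s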

How do I integrate ATCOM with my own systems?

Integration is offered as an engineering service by EM Photonics. Contact us to discuss your system and requirements.

How do you quantify the performance of ATCOM?

When considering atmospheric turbulence mitigation in system development, an educated buyer should concentrate on four areas: 1) quality of enhancement, 2) technology readiness, 3) computational cost, and 4) acquisition cost.

The best way to assess 1) is to look at screenshots and to ask the supplier to process datasets captured under the intended usage scenario. Based on years of experience, we can confidently claim that speckle offers the highest performance in terms of reconstruction quality and the ability to mitigate strong turbulence levels. Our solution is beyond the “research lab” stage; it has been implemented and tested in rugged portable devices, in real-world scenarios.

The computational cost of speckle is high compared with alternative approaches, which in turn drives up hardware cost in order to keep up with high-framerate video.

How does ATCOM deal with movement in the scene?

Multi-frame algorithms (such as speckle) are prone to significant artifacts and information loss when parts of the scene are moving. The current solution is to focus on scenes without fast-moving objects and to use high-framerate cameras; this combination produces acceptable results when imaging moving people or vehicles. We have developed software that processes movers and the background independently and stitches them together, but that functionality is still experimental and has not been migrated to the GPU and FPGA versions.
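
The sketch below illustrates the general idea of separating movers from a static background and stitching the results back together. It is a generic frame-differencing illustration, not ATCOM’s experimental implementation; the enhanced_background argument stands in for a turbulence-corrected background produced by a multi-frame algorithm.

    import numpy as np

    def split_and_stitch(frames, enhanced_background, threshold=10.0):
        """Toy mover/background separation via frame differencing.

        Generic illustration only, not ATCOM's experimental implementation.
        """
        frames = np.asarray(frames, dtype=np.float32)
        background = np.median(frames, axis=0)        # crude static-scene estimate
        latest = frames[-1]

        # Pixels that differ strongly from the temporal median are treated as movers.
        mover_mask = np.abs(latest - background) > threshold

        # Stitch: keep raw (unprocessed) pixels where movement is detected,
        # use the enhanced background everywhere else.
        output = np.where(mover_mask, latest, enhanced_background)
        return output, mover_mask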

Is ATCOM real-time? What is the processing latency?

The qualifier ‘real-time’ has different meanings in different application areas. For some, real-time means the ability to see interactive video after a brief delay (at a possibly reduced framerate). For others, it means having a compute rate that is at least as fast as the rate of incoming data. For yet others, it means that processing keeps up with the incoming rate with no appreciable delay (where “appreciable” is application dependent). Depending on the number of pixels per second required by your application and the chosen hardware platform (CPU/GPU/FPGA), speckle may or may not achieve real-time under a given definition.
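
As a rough illustration of the “compute rate at least as fast as the incoming rate” definition, one can compare the camera’s pixel rate against the sustained throughput of the chosen platform. The resolution, framerate, and throughput figures below are placeholders, not measured ATCOM numbers:

    # Compare incoming pixel rate against an assumed platform throughput.
    width, height = 1920, 1080      # sensor resolution (example)
    frame_rate = 60                 # incoming frames per second (example)

    incoming_pixels_per_s = width * height * frame_rate
    platform_pixels_per_s = 150e6   # assumed sustained processing throughput

    realtime = platform_pixels_per_s >= incoming_pixels_per_s
    print(f"Incoming: {incoming_pixels_per_s / 1e6:.0f} Mpix/s, "
          f"platform: {platform_pixels_per_s / 1e6:.0f} Mpix/s, "
          f"real-time: {realtime}")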

The latency, or processing delay, is closely related to the framerate. Speckle processes a block of Nave frames (a user-configurable parameter typically ranging between 3 and 100). The algorithm produces one output frame for each input frame. If latency were defined as the time it takes to propagate information from input to output, it would technically be one frame. In more practical terms, new content is very faint at first and progressively becomes easier to see until it reaches full brightness by frame Nave. This effect is very noticeable in slow videos (10 FPS) and practically invisible to the eye in fast videos (provided the computation can keep up).
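
The fade-in behavior can be illustrated with a toy sliding average over Nave frames; this is a stand-in for any multi-frame accumulation, not the speckle algorithm itself:

    # Toy illustration: a feature first appearing at frame 1 reaches full
    # brightness only once it is present in all Nave frames of the window.
    n_ave = 10
    window = [0.0] * n_ave          # window initially holds frames without the feature

    for frame in range(1, n_ave + 1):
        window.pop(0)
        window.append(1.0)          # each new frame contains the feature at brightness 1.0
        print(f"frame {frame}: apparent brightness = {sum(window) / n_ave:.1f}")
    # frame 1 -> 0.1, frame 2 -> 0.2, ..., frame 10 -> 1.0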

How does ATCOM compare to adaptive optics? Can I use a wavefront sensor?

The speckle algorithm, the basis of ATCOM, is a pure image-processing solution; no hardware modifications to existing optical systems are required in order to use it. It can work in tandem with adaptive optics technology; in fact, this is commonly done in astronomical applications.

Is ATCOM Super Resolution?

ATCOM does not go beyond the diffraction limit. Instead, it tries to recover images to their diffraction-limited quality (the maximum theoretical resolving power of the instrument). Depending on which definition of super resolution you subscribe to, speckle may or may not fall under its umbrella [Driggers, Krapels].
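
For reference, the diffraction-limited angular resolution mentioned above is given by the standard Rayleigh criterion; the wavelength and aperture in the worked example are assumed values for illustration:

    % Rayleigh criterion: diffraction-limited angular resolution of an
    % aperture of diameter D observing at wavelength \lambda
    \theta_{\min} \approx 1.22\,\frac{\lambda}{D}
    % Example (assumed values): \lambda = 550\,\mathrm{nm}, D = 20\,\mathrm{cm}
    % gives \theta_{\min} \approx 3.4\,\mu\mathrm{rad}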

Does ATCOM use ‘Lucky Imaging’ Techniques?

The speckle algorithm does not use lucky techniques. These could be added, and we expect good results from doing so. To date, we have performed manual lucky imaging for very challenging datasets, a labor-intensive process. One of the reasons we have not pursued automated lucky imaging algorithms is that they rely on the capacity of a computer to distinguish a “good” frame from a “bad” one. This is still an area of open research, and we have therefore been apprehensive about using automated lucky techniques on uncontrolled datasets.
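
To illustrate the frame-selection problem, the sketch below ranks frames with a simple sharpness heuristic (gradient energy) and keeps only the top fraction. This is a common lucky-imaging heuristic shown for illustration only; it is not a metric used by ATCOM:

    import numpy as np

    def sharpness_score(frame):
        """Crude frame-quality metric: mean squared image gradient (gradient energy)."""
        frame = np.asarray(frame, dtype=np.float32)
        gy, gx = np.gradient(frame)
        return float(np.mean(gx ** 2 + gy ** 2))

    def select_lucky_frames(frames, keep_fraction=0.1):
        """Keep the sharpest fraction of frames according to the metric above."""
        scores = [sharpness_score(f) for f in frames]
        order = np.argsort(scores)[::-1]              # best frames first
        keep = max(1, int(len(frames) * keep_fraction))
        return [frames[i] for i in order[:keep]]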