5 important tips for working with STDF

The STDF datalog format developed by Teradyne has become the de facto standard for datalogs in the semiconductor industry, as most modern ATE manufacturers support it. Processing STDF files usually means reading the binary files and converting them into formats that are human-readable or suitable for loading into a database. STDF files are also sometimes converted to ATDF, and ATDF files converted back to STDF.

During such data processing, we at yieldHUB have encountered several issues that can affect the usability and accuracy of the data being presented. In this blog, we will cover five things to watch out for when dealing with STDF files. These issues are significant, and knowing about them can prevent confusion later on when processing STDF datalogs.

1. Metadata is lost if an MPR is converted into PTRs

In STDF, a Multiple-Result Parametric Record (MPR) is structured so that one record contains a single test limit that applies to multiple results, one per pin. The Parametric Test Record (PTR) stores only one result.

We have seen a converter that transforms an MPR test into several PTR tests when writing out CSV. This is bad practice because it loses the metadata that makes the test an MPR, so when the CSV file is converted back to STDF, the MPR has become multiple PTRs.
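
To make the loss concrete, here is a minimal Python sketch. The dataclasses mirror only a handful of fields from the STDF V4 MPR and PTR records (TEST_NUM, LO_LIMIT/HI_LIMIT, RTN_INDX/RTN_RSLT, RESULT), and the expand_mpr_to_ptrs() helper is hypothetical; it simply shows what a naive MPR-to-PTR conversion throws away.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class MPR:
    # Simplified view of an STDF V4 Multiple-Result Parametric Record:
    # one test number, one limit pair, many results keyed by PMR pin index.
    test_num: int
    test_txt: str
    lo_limit: float
    hi_limit: float
    rtn_indx: List[int]    # PMR indexes identifying the pins measured
    rtn_rslt: List[float]  # one result per pin

@dataclass
class PTR:
    # Simplified view of a Parametric Test Record: exactly one result.
    test_num: int
    test_txt: str
    lo_limit: float
    hi_limit: float
    result: float

def expand_mpr_to_ptrs(mpr: MPR) -> List[PTR]:
    """Lossy expansion of an MPR into one PTR per result.

    The PMR pin indexes (rtn_indx) and the fact that the results were
    grouped in one record are dropped, and every PTR reuses the same
    test number -- the duplicate-test-number problem described here.
    """
    return [
        PTR(mpr.test_num, mpr.test_txt, mpr.lo_limit, mpr.hi_limit, r)
        for r in mpr.rtn_rslt
    ]

leakage = MPR(2001, "Input Leakage", -1e-9, 1e-9, [3, 4, 5],
              [2.1e-10, 1.8e-10, 2.4e-10])
for ptr in expand_mpr_to_ptrs(leakage):
    print(ptr.test_num, ptr.test_txt, ptr.result)  # same test number each time
```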

In one recent evaluation with a customer, we were puzzled why some datalog examples saved a certain test as an MPR and others as a PTR. It turned out that the reconverted datalogs had lost the MPR structure. This, in turn, caused problems because the result was multiple tests sharing the same test number.

2. Understand the correct use of MPR vs. PTR

Two of the key records in the format are the PTR (Parametric Test Record) and the MPR (Multiple-Result Parametric Record). Which should you use when the same test is performed multiple times on a single part?

PTR is the record to use for tests that produce a single result. Often, however, a single test is performed several times on different pins. When this is the case, the test development engineer has two options for storing the results: PTR or MPR.

An important characteristic of the MPR is that it has only a single limit for all of its results. This makes PTR the obvious choice if the individual measurements have different limits.

Many modern ICs have digital pins with very similar characteristics, so tests such as Input Leakage Current can conveniently be measured in one go using the digital pin resources on the ATE. The test limits will usually be the same for all pins, which makes the MPR ideal.

There may be cases where using an MPR is not practical. In that case PTR is the only choice, and it is wise to append the pin name to the PTR test name so that the engineer reviewing the data later can distinguish the multiple results from each other.
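
As an illustration, here is a tiny sketch of one possible naming convention; the '@' separator and the helper name are just examples, not a standard.

```python
def ptr_test_text(base_name: str, pin_name: str) -> str:
    """Build a per-pin PTR test name, e.g. 'Input Leakage @ GPIO3'.
    The '@' separator is only a convention; any consistent, parseable
    pattern works as long as the pin can be recovered during analysis."""
    return f"{base_name} @ {pin_name}"

for pin in ("GPIO1", "GPIO2", "GPIO3"):
    print(ptr_test_text("Input Leakage", pin))
```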

In yieldHUB, we have augmented MPR test analysis with the capability to upload an MPR pin map, which gives the user a view of the pin layout and the corresponding results. This enables an entirely new perspective when analyzing MPR data.

3. Watch out for float values and precision when converting

In STDF, test results, limits and other numerical fields are stored as 4-byte float values. During conversion to ASCII, we choose the text number format carefully so that we avoid losing resolution. What we have found is that test results can be integer-like but are still stored as floats. For example, a device under test may have a test that checks the output of the digital tests, and the result is an integer whose absolute value is in the hundreds of millions.

Using the same variables, the test results can also be in picoamps, for example, saved to 4 significant digits. Values like this cannot be stored exactly in the float data type. The result could be 1.4299E-12, but default print statements could output something like 1.4298921E-12. When that text is converted back to a float value and stored in float format, you may no longer be able to display 1.4299E-12 exactly.

What we found is that the %g formatting option in printf statements can be used to control the output so that data intended to be integers is displayed as integers and float values are displayed with a reasonable number of significant digits.

The use of %g is not a magical solution, however. We still find a few situations where it cannot produce the exact value we expected; for example, we encountered a value of 500 mV displayed as 500.00003051758 mV. For the vast majority of values, though, %g as the printf() format gives us the expected output.
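
The snippet below is a minimal Python sketch of both effects: it simulates the 4-byte float round trip with the struct module, then formats the value with %g using 7 significant digits, which roughly matches single-precision resolution. The exact noise digits you see will vary.

```python
import struct

def round_trip_float32(value: float) -> float:
    """Pack a value as the 4-byte float STDF uses, then unpack it back
    into a Python double -- roughly what happens when a result written
    by the tester is read again by a converter."""
    return struct.unpack("<f", struct.pack("<f", value))[0]

def fmt_result(value: float, sig_digits: int = 7) -> str:
    """Format a result with %g. Trailing zeros are dropped, so
    integer-like results print as plain integers; 7 significant digits
    roughly matches single-precision resolution."""
    return "%.*g" % (sig_digits, value)

leakage = round_trip_float32(1.4299e-12)  # picoamp-range result
count = round_trip_float32(1024.0)        # integer-like result

print(repr(leakage))        # full double repr shows extra noise digits
print(fmt_result(leakage))  # compact form, e.g. 1.4299e-12
print(fmt_result(count))    # prints as 1024, not 1024.000000
```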

4. Review metadata fields closely as they may have incorrect values

In recent development work with a customer, we were asked to retrieve the test program name from the MIR.USER_TXT field. Another customer asked us to use the value in MIR.LOT_ID as the sublot ID and to get the actual lot ID from another data source. In another case, the Pin Map Record (PMR) had repeated pin numbers, which caused problems with data analysis because it was unclear whether the first or the last occurrence of each pin should be used.
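
One common way to cope with such cases is a small per-customer mapping layer applied after parsing. The sketch below is hypothetical and uses the real MIR field names (JOB_NAM, LOT_ID, SBLOT_ID, USER_TXT); the profile names and mapping rules are made-up examples that mirror the cases described above.

```python
def normalize_metadata(mir: dict, profile: str) -> dict:
    """Map raw MIR fields onto the fields used for analysis.

    The profiles are hypothetical examples: one site stores the program
    name in USER_TXT, another stores the sublot in LOT_ID and supplies
    the real lot ID from another data source.
    """
    meta = {
        "program": mir.get("JOB_NAM", ""),
        "lot_id": mir.get("LOT_ID", ""),
        "sublot_id": mir.get("SBLOT_ID", ""),
    }
    if profile == "program_in_user_txt":
        meta["program"] = mir.get("USER_TXT", meta["program"])
    elif profile == "lot_is_sublot":
        meta["sublot_id"] = mir.get("LOT_ID", "")
        meta["lot_id"] = ""  # real lot ID comes from another data source
    return meta

print(normalize_metadata({"LOT_ID": "L1234.1", "USER_TXT": "PRG_REV_B"},
                         "program_in_user_txt"))
```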

It is advisable for test development engineers to make sure the metadata fields are used correctly, to avoid additional work when preparing for data analysis.

5. Some test results may be invalid values

Some test results may have extremely large absolute values. These will skew the statistics, so it is important during data analysis to filter them out properly. On the other hand, we have had customers saving a zero value when the test is invalid. It is important that the fields that mark a test result as valid or invalid are properly populated.
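
A minimal sketch of such filtering is shown below. It assumes the converter exposes the raw TEST_FLG byte from the PTR or MPR record; per the STDF V4 specification, bit 1 of TEST_FLG marks the RESULT field as not valid. The magnitude cut-off is an arbitrary example and should be tuned per parameter.

```python
RESULT_INVALID = 0x02  # STDF V4 PTR/MPR TEST_FLG bit 1: RESULT is not valid

def usable_result(result: float, test_flg: int, max_abs: float = 1e30) -> bool:
    """Return True only if the result should be included in statistics.

    Drops results flagged invalid via TEST_FLG and results whose
    magnitude is implausibly large. max_abs is an arbitrary example
    cut-off; choose one appropriate for the parameter being measured.
    """
    if test_flg & RESULT_INVALID:
        return False
    return abs(result) <= max_abs

print(usable_result(1.2e-9, 0x00))  # True: valid flag, sensible magnitude
print(usable_result(0.0, 0x02))     # False: flagged invalid (a padded zero)
print(usable_result(9.9e37, 0x00))  # False: implausibly large value
```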

The five tips above can help you deal with the many issues and variations found in STDF files, and perhaps even in non-STDF files. The more subcontractors are involved in the testing, the more variation you can expect. Even in companies with quite tight control over data capture, some test systems lack automated data capture and therefore depend on manual input.

If you are a test development engineer, or the person responsible for ensuring that the data captured in the datalogs is reliable, it is important to take steps to ensure the data stored in the datalogs is sensible and usable for data analysis.