STDF datalog analysis – Taking “rescreen” into account

In any good enterprise Semiconductor Yield Management System, it is necessary to report the overall yield of a lot by consolidating the datalogs that were used to test the lot. The "Run Type", parsed and recorded in yieldHUB for each datalog, is used in this calculation. The resulting yield figure is then the standard datalog-based yield used throughout yieldHUB for that final test lot ID. The process in yieldHUB is completely automated: once the consolidation flag is switched on within the yieldHUB database for a subcon, the user does not need to do anything.

For all of this to work properly, it is necessary to mark each datalog so that a system like yieldHUB can automatically determine the Run Type. This is good data management practice and is under the control of the test development engineer. The parametric and bin information should also be consolidated in a similar manner. For example, the charts and statistics for a test should reflect one value per tested unit, and this can be done through automated consolidation by yieldHUB.

The first chart here shows the test before any consolidation; the second shows it after consolidation, and you can see that there are fewer fails for the test once consolidation is done:

The default automated method of consolidation within the yieldHUB processing engine is as follows:

  • Ignore all QA and Correlation Run Types

  • Treat any undefined Run Types as Raw (equivalent to “First Pass”)

  • Apply the following algorithm to Raw and Re-screen Run Types:

    • Ignore all rejects from Raw

    • Ignore all rejects from any re-screen datalog up to the last re-screen datalog.

The result is that the remaining units are the good units from all Raw and all re-screen datalogs, plus the rejects from the last re-screen datalog only. The yield, bin and parametric results for the overall lot are then consistent with this algorithm.
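The default algorithm above can be sketched in a few lines of Python. The `Datalog` structure, its field names, and the handling of the no-re-screen case are illustrative assumptions, not yieldHUB's actual data model:

```python
# Sketch of the default consolidation algorithm described above.
from dataclasses import dataclass, field

@dataclass
class Datalog:
    run_type: str            # e.g. "RAW", "RESCREEN", "QA", "CORRELATION"
    start_date: int          # used to order re-screen datalogs
    good: list = field(default_factory=list)     # unit IDs that passed
    rejects: list = field(default_factory=list)  # unit IDs that failed

def consolidate(datalogs):
    """Return (good_units, reject_units) per the default algorithm."""
    # 1. Ignore QA and Correlation runs; any other run type is treated as
    #    Raw unless it is a re-screen.
    usable = [d for d in datalogs if d.run_type not in ("QA", "CORRELATION")]
    rescreens = sorted((d for d in usable if d.run_type == "RESCREEN"),
                       key=lambda d: d.start_date)
    raws = [d for d in usable if d.run_type != "RESCREEN"]

    # 2. Keep good units from all raw and all re-screen datalogs...
    good = [u for d in raws + rescreens for u in d.good]
    # 3. ...and rejects only from the last re-screen datalog. If no
    #    re-screen was run, the raw rejects stand (an assumption for a
    #    case the algorithm above does not spell out).
    rejects = (rescreens[-1].rejects if rescreens
               else [u for d in raws for u in d.rejects])
    return good, rejects
```

For example, a raw datalog with 7 good units and 3 rejects, followed by two re-screen passes that each recover one unit, consolidates to 9 good units and the single reject from the last re-screen; the QA datalog is ignored entirely.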

Dice with and without X/Y Co-ordinates

In consolidating lots with datalogs that have X/Y co-ordinates, yieldHUB uses the latest datalog per unit, based on the Start_Date of the datalog files. Since a die's X/Y co-ordinates are fixed, this method identifies each unit unambiguously and the consolidation is accurate.
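The latest-datalog-per-unit rule is a simple keyed deduplication. A minimal sketch, assuming per-die records with illustrative field names (`x`, `y`, `start_date`, `passed`):

```python
# For each (x, y) die position, keep only the record from the datalog
# with the latest Start_Date, so a re-test overrides the earlier result.
def consolidate_by_xy(records):
    """records: iterable of dicts with 'x', 'y', 'start_date', 'passed'.
    Returns one record per (x, y): the most recently tested."""
    latest = {}
    for rec in records:
        key = (rec["x"], rec["y"])
        if key not in latest or rec["start_date"] > latest[key]["start_date"]:
            latest[key] = rec
    return list(latest.values())
```

A die that failed in the first pass and passed on re-test contributes only its later, passing record to the consolidated result.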

In cases where X/Y co-ordinates are not available, as in most Final Test processes, yieldHUB assumes that all rejects from the raw test process are re-tested and datalogged in the re-screen datalogs. So during consolidation, all reject data from the raw datalogs is removed and replaced with the re-screen data.

Sources of Inaccuracies

The above consolidation system is prone to inaccuracies if the re-screen process is not consistently followed on the final test floor. For example, assume that raw testing of 1000 units had 70% yield (700 good, 300 rejects) but only 100 of those rejects were re-screened. The consolidated file will then contain only the 700 good units plus the 100 re-screened units, with the remaining 200 rejects unaccounted for.

Another case is that re-screening was repeated by putting the rejects back into the handler without closing the datalog file, so the lot size appears to grow. Continuing the example above: suppose all 300 rejects were re-screened with a yield of 50% (150 pass), and the remaining 150 rejects were put back into the handler with a yield of 40% (60 pass). Because the datalog was never closed, it would appear that 450 units were retested, with yield = (150 + 60)/(300 + 150) = 210/450 = 46.67%. The consolidated file yield then becomes (700 + 210)/(700 + 450) = 910/1150 = 79.13%, even though only 1000 physical units exist.
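The arithmetic in this example can be checked directly. The unit counts and yields below are the example's illustrative figures, not real production data:

```python
# Worked arithmetic for the inflated-lot-size example above.
raw_units, raw_yield = 1000, 0.70
raw_good = round(raw_units * raw_yield)          # 700 good
raw_rejects = raw_units - raw_good               # 300 rejects

# First re-screen pass: all 300 rejects, 50% yield.
rs1_good = round(raw_rejects * 0.50)             # 150 pass
# Rejects put back without closing the datalog: 150 units, 40% yield.
rs2_good = round((raw_rejects - rs1_good) * 0.40)  # 60 pass

# The open datalog makes it look like 450 units were retested.
apparent_retested = raw_rejects + (raw_rejects - rs1_good)    # 450
rescreen_yield = (rs1_good + rs2_good) / apparent_retested    # 210/450

consolidated_yield = ((raw_good + rs1_good + rs2_good)
                      / (raw_good + apparent_retested))        # 910/1150
print(round(rescreen_yield * 100, 2),
      round(consolidated_yield * 100, 2))
# → 46.67 79.13
```

The consolidated yield of 79.13% is over an apparent lot of 1150 units, which is the tell-tale sign that rejects were recycled through an open datalog.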

For consolidation of datalog files without X/Y co-ordinates or any other way of identifying each unit (such as a ChipID), the accuracy of the consolidated quantities is highly dependent on operator compliance with the re-screen procedures.

Customization

yieldHUB can customize the smart re-screen algorithm to take into account any variation in how testing is done by your company, provided consistent procedures are followed. For example, your company could be re-screening only certain bins. In that case we would encode an algorithm that keeps all good units and the un-re-screened fails from each qualifying datalog.
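A bin-specific rule like the one just described can be sketched as a per-datalog filter. The bin numbers and the `RESCREENED_BINS` set are hypothetical; which fail bins actually get re-screened is defined by your own test floor procedures:

```python
# Hypothetical customized rule: only certain fail bins are re-screened,
# so consolidation keeps each qualifying datalog's good units plus its
# fails in bins that are never re-screened.
RESCREENED_BINS = {5, 7}   # assumption: only these fail bins are re-screened

def keep_units(datalog_bins):
    """datalog_bins: dict of unit_id -> bin number (bin 1 = pass).
    Returns the unit IDs this datalog contributes to the consolidation."""
    return {uid for uid, b in datalog_bins.items()
            if b == 1 or b not in RESCREENED_BINS}
```

A unit failing in bin 9, which is never re-screened, is kept as a final fail from its original datalog, while bin 5 and bin 7 fails are dropped and expected to reappear in a later re-screen datalog.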

The good thing is that whatever algorithm matches your test operations, it can be automated within yieldHUB. Of course, this is only useful for lots where the procedures are actually followed.

There will always be non-idealities in production testing, but this feature also lets you identify them, so you can verify that the procedures you expect to be operating in production are being adhered to. For more information, you can visit our list of resources on STDF.