Alcor Micro(AU) controllers - Peculiarities of data recovery

Alcor Micro (AU) controllers are found in all sorts of devices, such as microSD cards, USB flash drives, SD cards in any package, and especially monolithic devices. They are also very popular in cheap, refurbished, and fake devices. Recent models do not differ much from other controller brands. This article attempts to summarize and generalize the chip-off data recovery process from devices with such controllers, especially the AU6989 and AU6998 families.

Page structures

The first step of every data recovery procedure is to determine the page layout. Every controller has its own page structure; in the case of Alcor Micro, it looks as follows.

The Data area with AU controllers is 1024 Bytes in the majority of cases; a 512 Byte data area can also be spotted, commonly in low-capacity devices.

The ECC area can vary from 10 to 240 Bytes and is highly dependent on the controller model and manufacturer settings. Usually, AUs reserve 70, 77, 120, or 126 Bytes for ECC codes.

The Service area may have 2, 3, or up to 4 Bytes. The most popular size, especially for AU6989 controllers, is 2 Bytes, and in the case of a 2 Byte Service area, it may also be XORed.

  1. Non - XORed Service area

  2. XORed Service area

Not the full Service area is XORed in AU controllers, but just the first byte, as in the example below.
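As a minimal sketch, the page structures described above can be expressed in code. The sizes below (1024 B data, 2 B Service area, 70 B ECC) and the assumption that the Service area directly follows the data area are illustrative values for one common layout; real devices vary.

```python
# Sketch: splitting a raw Alcor Micro page into its areas.
# Sizes and area order are assumptions for one common layout.
DATA_SIZE = 1024
SA_SIZE = 2
ECC_SIZE = 70
PAGE_SIZE = DATA_SIZE + SA_SIZE + ECC_SIZE  # 1096 bytes per raw page

def split_page(page: bytes):
    """Split one raw page into (data, service_area, ecc)."""
    assert len(page) == PAGE_SIZE
    data = page[:DATA_SIZE]                    # user data
    sa = page[DATA_SIZE:DATA_SIZE + SA_SIZE]   # service area (LBN, header)
    ecc = page[DATA_SIZE + SA_SIZE:]           # ECC codes
    return data, sa, ecc
```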

To assign the page layout, open the Dump viewer, then Structure view, and set the page structures.

When the page layout has been set and it corresponds to the page structures in the dump, the next step should be the ECC and XOR key.

If the set page layout doesn't match the page structures, like in the example below, it means that there are bad columns in the dump and it will be necessary to remove them.

Bad Columns

We can distinguish two types of bad column patterns in Alcor Micro controllers: 0xFF and XORed ones.

  1. 0xFF Bad columns

The standard type of bad column pattern is common for the majority of NAND controllers. 

  2. XORed bad columns

This type of bad column is very hard to notice: unlike the 0xFF ones, these columns don't share a fixed pattern, but instead depend on the previous byte of data. We assume that these controllers write the byte they already have in the buffer onto the bad column position. The picture above presents an example of such a column: both neighboring columns have the same patterns and hex values. The first one is user data, and the second (duplicated) one is a bad column that will need to be removed.
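Under the duplicated-byte assumption described above, such columns can be spotted statistically: a bad column repeats the preceding column's byte in nearly every page. A hypothetical sketch (not the VNR algorithm):

```python
# Sketch: detect "XORed" bad columns, i.e. columns whose bytes duplicate
# the preceding column in almost all pages. The threshold is illustrative.
def find_duplicate_columns(pages, threshold=0.99):
    """pages: list of equal-length byte strings (one per page)."""
    n_cols = len(pages[0])
    bad = []
    for c in range(1, n_cols):
        dup = sum(1 for p in pages if p[c] == p[c - 1])
        if dup / len(pages) >= threshold:
            bad.append(c)  # column c mirrors column c-1 -> likely bad
    return bad
```

Real dumps contain bit errors, which is why (as the article notes below) several blocks may need to be analyzed before all bad columns are found.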

Removal of XORed bad columns

To remove bad columns from the dump, attach a BadColumnRemover (BCR) element to the Physical image element and click Edit.

A detailed description of this tool is available in the video "NEW automated Bad Column Remover. Case studies".

In the first step, in the BCR menu, it's necessary to attach a special preset with a set of rules for Alcor Micro XORed bad columns. To do that click "Select preset" and select "AlcorMicro_AUxxxx_Xored".

Now you will need to find a block with user data (noise).

The list of NAND binary patterns, in bitmap viewer, is available here:

It is important to select a block with user data, because reading statistics from blocks that contain the XOR key may cause false positive results, since their binary statistics are very similar on each iteration. Therefore, these blocks should be skipped.

When a proper block has been found, select it with the left mouse button and click "Read values" to detect the type of each column (byte). The numbers of the currently selected block and the analyzed block are shown in the panel on the left side (Current block, Statistics block).

To locate the positions of XORed bad columns, right-click on a random Byte Index and choose the option "Set use for XORed columns".

The bad columns have been detected; now it's possible to add them by clicking "Add all bad columns".

In the next step, to check if all bad columns have been removed, open the BCR element and check whether the set page layout corresponds to the page structures in the dump.

If the page structure doesn't match, it means that bad columns remain in the dump. Such a situation may appear especially when there are a lot of bit errors in the dump; in these cases, it is necessary to repeat the bad column removal procedure several times and analyze different blocks with data.

Open the Bad Column Remover, attach the preset, and read values from different blocks with data. Then "Set use for XORed columns" and "Add all bad columns". Previously, after the analysis of a single block, 114 bad columns were detected; after the analysis of 3 more blocks, the amount increased to 122.

To check if all bad columns are removed, the page layout should be checked once again.

The page layout matches correctly, so all bad columns have been removed successfully.


ECC

In the majority of cases, controllers calculate their Error Correction Codes from XORed data: such controllers scramble the user data first and then calculate the ECC for the previously scrambled portion of data. In those situations, we connect the ECC element to the scrambled dump, i.e. the "Physical Image" or, in case of bad columns, the "BCR" element. Another situation takes place when the controller has calculated the ECC from raw, non-XORed data. To correct the dump in such cases, it's necessary to remove the XOR key influence from the dump first and then find matching ECC codewords. In Alcor Micro controllers, Error Correction Codes may be calculated in either way, and it's necessary to figure out which one is correct.
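The two orders can be illustrated with a toy sketch. A plain byte checksum stands in for the real BCH code, and the key bytes are only an example; this is not the controller's actual algorithm:

```python
# Toy illustration of the two ECC/XOR orders.
def xor_key(data: bytes, key: bytes) -> bytes:
    """Apply a repeating XOR key (applying it twice restores the data)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def checksum(data: bytes) -> int:
    """Stand-in for BCH parity: a simple one-byte checksum."""
    return sum(data) & 0xFF

raw = b"user data"
key = b"\x98\x8e\xe1"  # example key bytes, for illustration only
scrambled = xor_key(raw, key)

# Case 1: controller XORs first, then computes ECC over scrambled data.
# -> validate the ECC BEFORE removing the XOR key.
ecc1 = checksum(scrambled)

# Case 2: controller computes ECC over raw data, then XORs.
# -> validate the ECC AFTER removing the XOR key.
ecc2 = checksum(raw)
```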

More information about Error Correction Codes is available in the article ECC in NAND flash memory and in the video Chip reading. ECC usage. Page layout. Dump reread. Read Retry.

To check this, try the ECC search before and after the XOR for Data area element. Connect the ECC element to the source dump element and click "Find codewords".

If a matching codeword has been found, data transformation (INVERSION, XOR) should be determined in the next step.

If the ECC wasn't found, it's necessary to check several additional things.

The first is the size of the ECC area.

If the size is supported, then the ECC should be tested after the XOR for Data area. Check the chapter "XOR key determination - Data area".

The list of all supported ECCs for Alcor Micro controllers is available in VNR\DataBase\BCHCodewords\AlcorMicro(AU) folder.

If the size is not supported, the ECC should be tested manually with the "Codeword analysis" tool, before and after the XOR for Data area element.

A description of how to use this tool is available in the article ECC in NAND flash memory, chapter "Unsupported ECC - how to make a new code".

Data transformations

NAND memory controllers are known for the data transformations they apply to user data. These transformations follow from a few considerations, like speeding up input operations (INVERSION) or increasing entropy inside NAND TLC chips (XOR) to keep user data more stable. In some old devices equipped with SLC chips, user data may not be transformed at all. Each of these transformations can be observed through the Bitmap viewer.

  1. No transformation - Not transformed (raw) user data. Visually, user data contains more zeros than ones.

  2. Inversion - Binaries of inverted user data consist mostly of ones (grey pixels).

More details about controllers and their data transformations are available in the video: Inversion. Data transformation analysis. XOR. XOR analyzer.

  3. XOR - No patterns of user data (noise) in the bitmap viewer.
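The three appearances above can be approximated numerically: raw data is biased toward zeros, inverted data toward ones, and XORed data looks balanced. A rough sketch with illustrative thresholds (the Bitmap viewer itself works visually, not with these exact numbers):

```python
# Sketch: rough classification of a data block by its bit statistics.
def ones_ratio(block: bytes) -> float:
    """Fraction of '1' bits in the block."""
    return sum(bin(b).count("1") for b in block) / (8 * len(block))

def classify(block: bytes) -> str:
    r = ones_ratio(block)
    if r < 0.4:
        return "raw (more zeros than ones)"
    if r > 0.6:
        return "inverted (more ones than zeros)"
    return "XORed / noise (roughly balanced)"
```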

In Alcor Micro cases, we use two types of XOR keys: one for the Data area and one for the Service area.

  1. Data area XOR - The most popular XOR key used by Alcor Micro controllers is the one with signature 988EE1; it commonly appears in cases when the Service area has 2 Bytes.

  2. Service area XOR - Contains a XOR key, adjusted to the page size, for the first byte of the Service area. In many cases, when the SA is XORed, this XOR should be connected last, after the XOR for the Data area and after the ECC element.
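Conceptually, the two keys act on different parts of each page: the data key covers the whole Data area, while the SA key touches only the first Service area byte. A sketch under assumed offsets (1024 B data, SA at 1024-1025); in VNR these are separate elements, and the key values here are placeholders:

```python
# Sketch: undoing both XOR keys on one page. Offsets and keys are
# assumptions for illustration; since XOR is its own inverse, applying
# the same function twice restores the original page.
def apply_xor(buf: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(buf))

def unscramble_page(page: bytes, data_key: bytes, sa_key_byte: int) -> bytes:
    data = apply_xor(page[:1024], data_key)                    # Data area XOR
    sa = bytes([page[1024] ^ sa_key_byte]) + page[1025:1026]   # only 1st SA byte
    rest = page[1026:]                                         # ECC left as-is
    return data + sa + rest
```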

XOR key determination - Data area

To detect the XOR key for the Data area, click on the element where you want to apply the XOR; it should be Physical Image, ECC, or BCR in case of bad columns. Then run the XOR analyzer.

Start XOR autodetection.

After autodetection, the XOR key candidates will be displayed in the right panel; to check which one is correct, click the "Validate key" button. In this case, 3 matching keys have been detected, and now it's necessary to attach one of them.

To do that, select the XOR key and click "Apply XOR".

More details about XOR keys and their detection through the XOR analyzer are available in the videos VNR XOR analyzer and Inversion. Data transformation analysis. XOR. XOR analyzer, and in the article XOR.

XOR key determination - Service area

To determine the XOR for the Service area, open the XOR analyzer, fill out the key filters, like in the example below, and then select a proper XOR key according to the size of the ECC area. In our case, the ECC area has 70 Bytes, so the XOR for a 70 Byte ECC should be selected.

To check if the key is working, move to the Service area position and switch the tab from "Source" to "Result".

If the noise pattern has been removed, like in the example above, it means that the key is working and can be attached.

In the majority of cases, the XOR key for the Service area will be attached after the XOR key for the Data area and the ECC element, like in the example below.

Multi plane page allocation

When the proper ECC and data transformation have been found and removed, in the next step it will be necessary to determine in which mode pages of data were recorded to the NAND memory. We can distinguish two types of page allocation:

  1. Single plane page allocation - sequential page allocation
  2. Multi plane page allocation - parallel page allocation among physical chips and chip planes of the NAND memory.

In case a device was using Multi plane page allocation, it is necessary to connect an additional PAIR element, and in the case of multiple chips, the UNITE element should also be used.

A description of these elements, including the procedure for determining the controller's page allocation mode, is given in the article Multi Plane Page Allocation and in the video Multi-plane page allocation. PAIR and UNITE usage.

Block management

In the last step of data recovery, it is necessary to assemble the blocks of data into a logical array in order to obtain a logical image. This is possible thanks to the Service area, where NAND controllers usually write a Logical Block Number (LBN) which helps to determine the sequence of blocks in the dump.

The Block management step can be processed through the Markers table. This element allows reading particular values from each block in the dump; then it is possible to filter and stack these blocks together according to the previously loaded values. To use it, connect the Markers table to the latest element and fill the Table indexes with the positions of the Service area structures (LBN, Header).

In Alcor Micro cases with a 2 Byte Service area, the LBN is located at positions [1024, 1025] and is usually written in Little-Endian order; therefore, in the Markers table, the reversed order of these bytes should be used: [1025, 1024]. The first Byte of the LBN [1025] will serve as a Header, and for Test1 we set [510, 511], which will help to identify the first block.
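The Little-Endian detail can be shown in a one-line sketch: the byte at the lower offset is the low byte of the LBN, so a page with bytes 34 80 at positions 1024-1025 carries LBN 0x8034.

```python
# Sketch: reading the LBN from a page with a 2-byte Service area at
# offsets 1024-1025, stored Little-Endian as described above.
def read_lbn(page: bytes) -> int:
    return int.from_bytes(page[1024:1026], "little")
```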

More information about Service area structures, Endianness, and basics of Markers table is presented in the video: Block management. LBN, headers, Marker table.

When the Markers table indexes have been filled, click the "Create table" button. Now VNR will load the values from the previously selected positions from each block in the dump. When the values have been loaded, the table can be opened.

Now it will be necessary to filter out blocks that are out of the LBN sequence; in Alcor Micro cases this sequence of blocks usually starts from 0x8000 and ends at 0xF0FF. To filter the LBN by this sequence, click the "Block filter" button and filter the LBN by the range [8000 - F0FF].

Then the Header structure should also be filtered, with the "Set" rule, by [80, 90, A0, B0, C0, D0, E0, F0], like in the example below.
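Both filters together amount to a simple predicate over each block record. A sketch with a hypothetical record shape of (block index, LBN, header byte), using the range and set values from the text:

```python
# Sketch: filtering block records the way the Markers table filters do.
HEADERS = {0x80, 0x90, 0xA0, 0xB0, 0xC0, 0xD0, 0xE0, 0xF0}

def filter_blocks(blocks):
    """blocks: list of (block_index, lbn, header) tuples (assumed shape)."""
    return [b for b in blocks
            if 0x8000 <= b[1] <= 0xF0FF and b[2] in HEADERS]
```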

When the blocks have been filtered, in the next step the sequence of blocks should be examined. At first, it is necessary to sort the blocks.

After that, the LBN step should be changed from 1/1 to 10/1, duplicated blocks should be removed, and missing ones should be added. In order to locate the proper first block with the Master Boot Record, it is necessary to follow the values from Test1 and find the MBR signature. For file systems from the FAT family and NTFS, the signature will always be equal to 0x55AA.
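The ordering and deduplication steps, plus the signature check, can be sketched as follows. Keeping the first copy of each LBN is a simplification; the video referenced below explains how duplicates should actually be chosen:

```python
# Sketch: ordering blocks by LBN, dropping duplicate LBNs, and checking a
# candidate first block for the 0x55AA boot-sector signature at 510-511.
def order_blocks(blocks):
    """blocks: list of (block_index, lbn); keeps the first copy of each LBN."""
    seen, ordered = set(), []
    for idx, lbn in sorted(blocks, key=lambda b: b[1]):
        if lbn not in seen:
            seen.add(lbn)
            ordered.append((idx, lbn))
    return ordered

def has_boot_signature(sector: bytes) -> bool:
    return sector[510:512] == b"\x55\xaa"
```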

An explanation of how to properly remove duplicated blocks, and why it is necessary to insert missing ones in the Markers table, is presented in the video "Block management. LBN, headers, Marker table" - 1:13:21.

When the duplicated blocks have been removed and the gaps have been filled, it is possible to build a Logical image. Click "Create logical image" in the Markers table parameters tab and connect the Logical image element; after that, run the File System viewer from the Workspace panel.
