Need help with edge cases  #33

@neo2043

Description

I'm working on a project and I'm using your repo as a template. While going through your code I saw some things I didn't understand.

Question 1

File/Function: MFTRecord::datasize() in Sources/NTFS/ntfs_mft_record.cpp

Question: If $DATA is in the $ATTRIBUTE_LIST, the function returns prematurely, reporting the datasize of only the first $DATA attribute it encounters. If a file is heavily fragmented and has multiple $DATA attributes in the $ATTRIBUTE_LIST, all the other data attributes are ignored.

Cases encountered: I have encountered files with multiple $DATA attributes pointing to external MFT records. The files were downloaded, so an alternate data stream named Zone.Identifier was also created, and ntfstool returns the datasize of that stream.

Would/could be the case: The datasize of the Zone.Identifier data stream is returned instead of that of the unnamed $DATA stream.

Answer: After rigorous trial and error I found that if a $DATA attribute is in the $ATTRIBUTE_LIST, the total length of the file is stored in the first $DATA attribute regardless of how fragmented the stored file is. In the Zone.Identifier case above, that data stream is different from the main unnamed one, so the datasize returned is that of Zone.Identifier and not the real file's datasize.
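The fix implied by the answer above can be sketched as follows. This is a minimal, hypothetical model (the struct DataAttrEntry and helper unnamed_data_size() are not ntfstool's actual types), assuming the $DATA entries gathered from the $ATTRIBUTE_LIST carry a stream name, a lowest VCN, and a real size: filter out named streams such as Zone.Identifier first, then read the total size from the first extent (lowest VCN 0) of the unnamed stream, since later extents of a fragmented stream don't repeat it.

```cpp
#include <cstdint>
#include <optional>
#include <string>
#include <vector>

// Hypothetical, simplified view of one $DATA attribute entry as collected
// from an $ATTRIBUTE_LIST: only the fields needed for sizing.
struct DataAttrEntry {
    std::wstring name;   // empty for the main unnamed stream
    uint64_t lowest_vcn; // 0 marks the first extent of a stream
    uint64_t real_size;  // full file size, valid only on the first extent
};

// Return the size of the unnamed $DATA stream. Per the observation in the
// issue, the total length lives in the first (lowest_vcn == 0) extent, so
// the remaining extents of a fragmented stream can be ignored, and named
// streams like Zone.Identifier must be skipped.
std::optional<uint64_t> unnamed_data_size(const std::vector<DataAttrEntry>& entries) {
    for (const auto& e : entries) {
        if (e.name.empty() && e.lowest_vcn == 0)
            return e.real_size;
    }
    return std::nullopt; // no unnamed $DATA stream found
}
```

With this selection rule, a downloaded file whose Zone.Identifier entry happens to come first in the list would still report the size of the main unnamed stream.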
Question 2

File/Function: extract_file() (line 62) in Sources/Commands/command_extract.cpp and MFTRecord::_process_data_raw() in Sources/NTFS/ntfs_mft_record.cpp

Question: In _process_data_raw(), in the first of two cases, $DATA is non-resident and compressed; if the function encounters a sparse datarun, a buffer is still created, zeroed out, and yielded, ignoring the skip_parse parameter. In the second case, $DATA is non-resident and uncompressed, and depending on skip_parse this step might be skipped and nothing yielded. Why is one case skip_parse-sensitive but not the other? In command_extract.cpp, data_to_file() is called with skip_parse set to true, so it ignores the sparse blocks and writes every yielded block linearly. Wouldn't dropping these blocks and writing all the remaining blocks contiguously create corrupt files? Wouldn't a sparse-aware data_to_file() be better, one that does not literally write the sparse blocks but instead moves the file pointer forward before writing the next data, leaving an empty hole, i.e. a sparse block?

Cases encountered: None. I created files with /dev/urandom and /dev/zero in WSL and checked with ntfstool, but no sparse blocks were encountered.

Would/could be the case: Corrupt extraction/undeletion of files.

Answer: Not yet.
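The sparse-aware writer suggested in the question could look roughly like this. It is a sketch, not ntfstool's actual code: the Block struct and write_blocks_sparse() are hypothetical stand-ins for what _process_data_raw() yields and what data_to_file() does. Real blocks are written out; sparse runs are skipped by seeking forward, which leaves a hole that reads back as zeros instead of materialising a zero-filled buffer.

```cpp
#include <cstdint>
#include <fstream>
#include <vector>

// Hypothetical stand-in for one yielded chunk from _process_data_raw():
// either real data, or a sparse run of a known byte length.
struct Block {
    bool sparse = false;
    uint64_t length = 0;       // run length in bytes when sparse
    std::vector<uint8_t> data; // payload when not sparse
};

// Minimal sketch of a sparse-aware data_to_file(): data blocks are
// written, sparse runs are seeked over. The filesystem fills the gap
// with zeros (or an actual hole) when later data is written past it.
bool write_blocks_sparse(std::ofstream& out, const std::vector<Block>& blocks) {
    uint64_t total = 0;
    for (const auto& b : blocks) {
        if (b.sparse) {
            out.seekp(static_cast<std::streamoff>(b.length), std::ios::cur);
            total += b.length;
        } else {
            out.write(reinterpret_cast<const char*>(b.data.data()),
                      static_cast<std::streamsize>(b.data.size()));
            total += b.data.size();
        }
    }
    // A trailing sparse run leaves nothing after the seek, so the file
    // must be explicitly extended; writing one zero byte at total-1 is
    // a portable way to do that.
    if (!blocks.empty() && blocks.back().sparse && total > 0) {
        out.seekp(static_cast<std::streamoff>(total - 1), std::ios::beg);
        out.put('\0');
    }
    return static_cast<bool>(out);
}
```

To punch genuine holes (not just zero-filled gaps), the platform-specific route would be FSCTL_SET_SPARSE plus seeks on Windows, but the plain seek-and-write above already keeps offsets correct, which is the corruption concern raised here.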
