special file I/O error #12
Dear vmfs-tools developers, Steven.
Hi. Update: I've just read the blog post for the 0.2.5 release and see that the 256GB limit is a known feature. Are there any plans to support larger files?
There's a great breakdown of what's going on over at SourceForge: https://sourceforge.net/p/partclone/discussion/638475/thread/806d84cb/#9a50 It looks like we're missing the handling of pointer blocks (VMFS_BLK_TYPE_PB), so files > 256GB return I/O errors. See also: http://cormachogan.com/2013/11/15/vsphere-5-5-storage-enhancements-part-2-vmfs-heap/
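For context, here is the back-of-the-envelope arithmetic behind that 256GB ceiling. This is a sketch only: the fan-out values below (256 block addresses per inode, 1024 addresses per 4KB pointer block, 1MB file blocks) are assumptions that match vmfs-tools' defaults, not figures taken from this thread.

```c
#include <stdio.h>

/* Assumed geometry; matches vmfs-tools defaults, not verified here. */
#define INODE_SLOTS   256ULL        /* block addresses stored in the inode */
#define PB_ENTRIES    1024ULL       /* addresses per 4 KB pointer block    */
#define FILE_BLK_SIZE (1ULL << 20)  /* 1 MB VMFS file block                */

int main(void)
{
    /* Single indirection: inode slot -> pointer block -> file block,
     * so the largest addressable file is the product of the three. */
    unsigned long long max_size = INODE_SLOTS * PB_ENTRIES * FILE_BLK_SIZE;
    printf("max file size: %llu GiB\n", max_size >> 30); /* prints 256 */
    return 0;
}
```

Anything past that boundary needs a second level of indirection, which is the double indirect pointer-block handling discussed below.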
I think I have a fix for this. I'm doing some testing and will open a pull request: https://github.com/mlsorensen/vmfs-tools Better late than never :-) Please test if you're interested and post bugs to my repo.
@mlsorensen That would be awesome. I spent a little while trying to implement pointer block support about a year ago but didn't have the time to get it working properly. We've been using vmfs-tools at work as part of a toolkit we developed for recovering files from within VMDKs on SAN snapshots of VMFS volumes, but hit the 256GB bug/feature/limit when dealing with some of our bigger VMs. I'd be more than happy to test your modified code.
Ok, try it out and let me know. So far I've been able to fill a 500GB vmdk on vSphere 5.5 with 300GB of files and read it with vmfs-tools, with all md5s verifying correctly.
Thank you very much.
-rw-r--r-- 1 root root 511M Dec 1 03:15 ./ISO/drbl-live/stable/drbl-live-xfce-2.3.1-6-amd64.iso
All the md5sums are correct.
glandium#12 This provides read support for files > 256G, due to vSphere 5 adding double indirect block pointers. It uses a double indirect lookup if the file has a blocksize of 1M and is over the VMFS size threshold for using double indirect blocks. Perhaps there's a cleaner way of determining the use of double indirect from the inode. We may also want to implement a block pointer cache like VMware introduced with this feature, however given the use cases of this software it may not be necessary.
Well, that at least means nothing was broken; you wouldn't hit my patch until 256G file size. I haven't tested writing at all. I found a bug in my code at 1T: I needed a modulus where the upper pointers are traversed. See the update in my repo.
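For anyone following along, here is a minimal sketch of the two lookup paths (hypothetical names throughout; read_pb stands in for the real pointer-block reader, and the fan-out is the assumed 1024 entries per pointer block, so this is not the actual patch). The middle-level modulus in resolve_double is the kind of fix described above: without it, the index runs past the first-level pointer block once a file crosses 1024 * 1024 blocks, i.e. 1TB at a 1MB block size.

```c
#include <stdint.h>

#define PB_ENTRIES 1024ULL  /* assumed addresses per pointer block */

/* Hypothetical helper: returns entry idx of pointer block pb. */
typedef uint32_t (*read_pb_t)(uint32_t pb, uint32_t idx);

/* Single indirect (files <= 256 GB): inode slot -> PB -> file block. */
uint32_t resolve_single(const uint32_t *inode_blks, read_pb_t read_pb,
                        uint64_t blk)
{
    uint32_t pb = inode_blks[blk / PB_ENTRIES];
    return read_pb(pb, (uint32_t)(blk % PB_ENTRIES));
}

/* Double indirect (files > 256 GB): inode slot -> PB -> PB -> file block.
 * The "% PB_ENTRIES" on the middle level keeps the index inside the
 * first-level pointer block; dropping it only bites once blk exceeds
 * PB_ENTRIES * PB_ENTRIES, i.e. the 1 TB mark with 1 MB blocks. */
uint32_t resolve_double(const uint32_t *inode_blks, read_pb_t read_pb,
                        uint64_t blk)
{
    uint32_t pb1 = inode_blks[blk / (PB_ENTRIES * PB_ENTRIES)];
    uint32_t pb2 = read_pb(pb1, (uint32_t)((blk / PB_ENTRIES) % PB_ENTRIES));
    return read_pb(pb2, (uint32_t)(blk % PB_ENTRIES));
}
```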
I've done a bit of testing this morning. I mounted a 1TB VMFS volume from a SAN snapshot, then mounted a 720GB VMDK file (stored on that volume) containing an NTFS filesystem. I was able to browse the contents and successfully open files. :-) Mounting a VMDK over 256GB failed on the 0.2.5 release but your pointer block fix seems to have done the trick! Happy days.
The fix provided by @mlsorensen should be included in this repository, because official distributions (like Ubuntu) are still shipping this version. By the way, I couldn't manage to compile the fix on Ubuntu, but it worked perfectly on CentOS. Big thanks to both.
@carlosgs83: which CentOS version did you manage to compile it on? Can you share your binaries?
@knackko I only used this to rescue a non-working ESXi machine, and I managed that successfully some time ago. No special CentOS version was needed; I remember I did it with a live DVD.
Hello,
This issue was mentioned in this interesting article about recovering an encrypted file system: https://medium.com/@DCSO_CyTec/unransomware-from-zero-to-full-recovery-in-a-blink-8a47dd031df3
New issue about a file I/O error:
I used clonezilla-live to mount a VMFS volume with vmfs-tools and found that a particular vmdk file can't be read. fsck also shows me some errors, but the vmdk works fine under ESXi. I can't tell whether this is a bug in vmfs-tools or not. Could you help me fix the issue?
I have already done some digging and dumped the error messages below:
root@debian:~# df
Filesystem 1K-blocks Used Available Use% Mounted on
rootfs 2039276 372256 1667020 19% /
...
/dev/fuse 971505664 542525440 428980224 56% /mnt
root@debian:~# mount
/dev/fuse on /mnt type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions)
root@debian:~# find /mnt -type f -exec md5sum ...
3b6d3b0c8c96be3395dbe138f06464a1 /mnt/linux/linux.vmdk
md5sum: /mnt/linux/linux_1-flat.vmdk: Input/output error
cp /mnt/linux/linux_1-flat.vmdk ./
cp: error reading ‘/mnt/linux/linux_1-flat.vmdk’: Input/output error
cp: failed to extend ‘./linux_1-flat.vmdk’: Input/output error
Scanning 130000 FDC entries...
vmfs_bitmap_get_entry -1
vmfs_bitmap_get_entry -1
vmfs_bitmap_get_entry -1
Block 0x6d783f3c is used but not allocated.
Block 0x69442023 is referenced by multiple inodes:
0x01405e84 0x01c05e84
Block 0x69442023 is used but not allocated.
Data collected from inode entries:
File Blocks : 17810
Sub-Blocks : 2
Pointer Blocks : 18
Inodes : 21
./fsck.vmfs/fsck.vmfs /dev/sda3 > fsck.log
Orphaned inode 0x00000000
File Block 0x006d69c1 is lost.
File Block 0x006d6a01 is lost.
File Block 0x006d6a41 is lost.
File Block 0x006d6a81 is lost.
File Block 0x006d6ac1 is lost.
File Block 0x006d6b01 is lost.
.......... (many more lines with the same error, only the file block differs)
File Block 0x02715fc1 is lost.
File Block 0x02716001 is lost.
Pointer Block 0x0003da43 is lost.
Pointer Block 0x1003da43 is lost.
Pointer Block 0x2003da43 is lost.
Pointer Block 0x3003da43 is lost.
.......... (many more lines with the same error, only the pointer block differs)
Pointer Block 0x3003e203 is lost.
Pointer Block 0x4003e203 is lost.
Unallocated blocks : 2
Lost blocks : 527389
Undefined inodes : 0
Orphaned inodes : 1
Directory errors : 0
root@debian:/home/partimag/dev/vmfs-tools# stat -f /mnt
File: "/mnt"
ID: 0 Namelen: 0 Type: fuseblk
Block size: 1048576 Fundamental block size: 1048576
Blocks: Total: 948736 Free: 418926 Available: 418926
Inodes: Total: 130000 Free: 129979
root@debian:/home/partimag/dev/vmfs-tools# stat /mnt/linux/linux_1-flat.vmdk
File: ‘/mnt/linux/linux_1-flat.vmdk’
Size: 536870912000 Blocks: 1048576000 IO Block: 4096 regular file
Device: 16h/22d Inode: 25190020 Links: 1
Access: (0600/-rw-------) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2013-10-29 22:25:29.000000000 +0000
Modify: 2013-10-29 08:56:02.000000000 +0000
Change: 2013-10-03 23:12:58.000000000 +0000
Birth: -
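One sanity check ties this report back to the limit discussed above. This is a sketch using only the numbers already in the thread: the file size from the stat output and the 256GB single-indirect cap mentioned earlier.

```c
#include <stdio.h>

int main(void)
{
    /* Size of the failing linux_1-flat.vmdk, from the stat output above. */
    unsigned long long size  = 536870912000ULL;
    unsigned long long limit = 256ULL << 30;   /* 256 GiB single-indirect cap */
    printf("%.1f GiB, past the 256 GiB cap: %s\n",
           size / (double)(1ULL << 30),
           size > limit ? "yes" : "no");       /* prints: 500.0 GiB ... yes */
    return 0;
}
```

So the unreadable flat VMDK is 500GiB, squarely in the range that needs the double indirect pointer-block support from the fix above.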