special file I/O error #12

Open
Thomas-Tsai opened this issue Nov 7, 2013 · 14 comments
Comments

@Thomas-Tsai

I'm opening a new issue about a file I/O error here.

I used clonezilla-live to mount a VMFS volume with vmfs-tools and found that one particular vmdk file can't be read. fsck also shows me some errors, yet the vmdk works fine under ESXi. I can't tell whether this is a bug in vmfs-tools or not. Could you help me fix the issue?

I have already done some digging and dumped the error messages below:
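(The volume was mounted with vmfs-fuse, something along the lines of "vmfs-fuse /dev/sda3 /mnt"; the exact device here is an assumption inferred from the fsck run further down.)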

root@debian:~# df
Filesystem 1K-blocks Used Available Use% Mounted on
rootfs 2039276 372256 1667020 19% /
...
/dev/fuse 971505664 542525440 428980224 56% /mnt

root@debian:~# mount
/dev/fuse on /mnt type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions)

root@debian:~# find /mnt -type f -exec md5sum ...
3b6d3b0c8c96be3395dbe138f06464a1 /mnt/linux/linux.vmdk
md5sum: /mnt/linux/linux_1-flat.vmdk: Input/output error

cp /mnt/linux/linux_1-flat.vmdk ./
cp: error reading ‘/mnt/linux/linux_1-flat.vmdk’: Input/output error
cp: failed to extend ‘./linux_1-flat.vmdk’: Input/output error

Scanning 130000 FDC entries...
vmfs_bitmap_get_entry -1
vmfs_bitmap_get_entry -1
vmfs_bitmap_get_entry -1
Block 0x6d783f3c is used but not allocated.
Block 0x69442023 is referenced by multiple inodes:
0x01405e84 0x01c05e84
Block 0x69442023 is used but not allocated.
Data collected from inode entries:
File Blocks : 17810
Sub-Blocks : 2
Pointer Blocks : 18
Inodes : 21

./fsck.vmfs/fsck.vmfs /dev/sda3 > fsck.log

Orphaned inode 0x00000000
File Block 0x006d69c1 is lost.
File Block 0x006d6a01 is lost.
File Block 0x006d6a41 is lost.
File Block 0x006d6a81 is lost.
File Block 0x006d6ac1 is lost.
File Block 0x006d6b01 is lost.
.......... (many more lines with the same error for different file blocks)
File Block 0x02715fc1 is lost.
File Block 0x02716001 is lost.
Pointer Block 0x0003da43 is lost.
Pointer Block 0x1003da43 is lost.
Pointer Block 0x2003da43 is lost.
Pointer Block 0x3003da43 is lost.
.......... (many more lines with the same error for different pointer blocks)
Pointer Block 0x3003e203 is lost.
Pointer Block 0x4003e203 is lost.
Unallocated blocks : 2
Lost blocks : 527389
Undefined inodes : 0
Orphaned inodes : 1
Directory errors : 0

root@debian:/home/partimag/dev/vmfs-tools# stat -f /mnt
File: "/mnt"
ID: 0 Namelen: 0 Type: fuseblk
Block size: 1048576 Fundamental block size: 1048576
Blocks: Total: 948736 Free: 418926 Available: 418926
Inodes: Total: 130000 Free: 129979

root@debian:/home/partimag/dev/vmfs-tools# stat /mnt/linux/linux_1-flat.vmdk
File: ‘/mnt/linux/linux_1-flat.vmdk’
Size: 536870912000 Blocks: 1048576000 IO Block: 4096 regular file
Device: 16h/22d Inode: 25190020 Links: 1
Access: (0600/-rw-------) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2013-10-29 22:25:29.000000000 +0000
Modify: 2013-10-29 08:56:02.000000000 +0000
Change: 2013-10-03 23:12:58.000000000 +0000
Birth: -

@stevenshiau

Dear vmfs-tools developers,
In the Clonezilla project we have the issue that Thomas mentioned. ArtieMan, a Clonezilla user, has done some tests and knows how to reproduce the issue. He posted some of his results here:
http://sourceforge.net/p/partclone/discussion/638475/thread/806d84cb/
Hope this helps to solve the issue.
Thanks.

Steven.

@danje

danje commented Feb 21, 2014

Hi,
Has there been any progress with this issue? It'd be great to have pointer block support for large files in VMFS5. If I run vmfs-fuse in debug mode (with the -d flag) and then try to read a large VMDK file (>256GB?), vmfs-fuse errors with "VMFS: unknown block type 0x03" and the shell reports an "Input/output error". A quick glance at the source suggests that the fault lies in the vmfs_file_pread function (vmfs_file.c): the switch doesn't know how to handle VMFS_BLK_TYPE_PB. What would be involved in adding pointer block support?

Update: I've just read the blog post for the 0.2.5 release and see that the 256GB limit is a known feature. Are there any plans to support larger files?
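To illustrate what I mean (this is only a conceptual sketch with made-up names and values, not the actual vmfs-tools code), the read path dispatches on the block type, and anything it doesn't recognise falls through to a default branch that returns EIO, which is what shows up in the shell as the Input/output error:

#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/types.h>

/* Conceptual sketch only: the enum, helpers and layout are invented for
 * illustration and do not match the real VMFS on-disk encoding. */
enum blk_type { BLK_TYPE_NONE, BLK_TYPE_FB, BLK_TYPE_SB, BLK_TYPE_PB };

static ssize_t read_fb(uint32_t id) { (void)id; return 0; } /* stand-in: read a file block */
static ssize_t read_sb(uint32_t id) { (void)id; return 0; } /* stand-in: read a sub-block  */
static ssize_t read_pb(uint32_t id) { (void)id; return 0; } /* stand-in: resolve a pointer
                                                               block entry, then read the
                                                               file block it points to     */

static ssize_t read_block(enum blk_type type, uint32_t id)
{
    switch (type) {
    case BLK_TYPE_FB: return read_fb(id);
    case BLK_TYPE_SB: return read_sb(id);
#ifdef WITH_PB_SUPPORT
    case BLK_TYPE_PB: return read_pb(id);    /* the missing case for large files */
#endif
    default:
        /* today a pointer block lands here */
        fprintf(stderr, "VMFS: unknown block type 0x%02x\n", (unsigned)type);
        return -EIO;                          /* surfaces as Input/output error  */
    }
}

Adding pointer block support would essentially mean filling in that missing case: resolving the pointer block entry for the requested offset and then reading the file block it points to.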

@mlsorensen

There's a great breakdown of what's going on here over at sourceforge:

https://sourceforge.net/p/partclone/discussion/638475/thread/806d84cb/#9a50

It looks like we're missing the handling of pointer blocks (VMFS_BLK_TYPE_PB), so files > 256GB return I/O errors.

http://cormachogan.com/2013/11/15/vsphere-5-5-storage-enhancements-part-2-vmfs-heap/
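For what it's worth, the 256GB figure lines up with the single-indirect addressing limit: with one level of pointer blocks the format can reference 262,144 file blocks per file (if I'm reading it right), so the cap scales with the block size:

262,144 blocks x 1 MB/block = 256 GB (the limit being hit here)
262,144 blocks x 8 MB/block = 2 TB (the old VMFS-3 maximum file size)

Going past that needs the double indirect pointer blocks that vSphere 5 introduced, which is exactly what vmfs-tools doesn't handle yet.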

@mlsorensen

I think I have a fix for this. I'm doing some testing and will open a pull request: https://github.com/mlsorensen/vmfs-tools

Better late than never :-) Please test if you're interested and post bugs to my repo.

@danje

danje commented May 23, 2015

@mlsorensen That would be awesome. I spent a little while trying to implement pointer block support about a year ago but didn't have the time to get it working properly. We've been using vmfs-tools at work as part of a toolkit we developed for recovering files from within VMDKs on SAN snapshots of VMFS volumes, but hit the 256GB bug/feature/limit when dealing with some of our bigger VMs. I'd be more than happy to test your modified code.

@mlsorensen

Ok, try it out and let me know. So far I've been able to fill a 500GB vmdk on vSphere 5.5 with 300GB of files and read it back with vmfs-tools, with all md5s verifying correctly.

@Thomas-Tsai
Author

Thank you very much.
I can confirm it works well right now. I copied some ISOs and concatenated some together; the largest is 218G:

-rw-r--r-- 1 root root 511M Dec 1 03:15 ./ISO/drbl-live/stable/drbl-live-xfce-2.3.1-6-amd64.iso
-rw-r--r-- 1 root root 518M Dec 1 03:16 ./ISO/drbl-live/stable/drbl-live-xfce-2.3.1-6-i586.iso
-rw-r--r-- 1 root root 519M Dec 1 03:16 ./ISO/drbl-live/stable/drbl-live-xfce-2.3.1-6-i686-pae.iso
-rw-r--r-- 1 root root 2.5G Dec 1 03:17 ./ISO/FreeBSD/FreeBSD-9.1-RELEASE-amd64-dvd1.iso
-rw-r--r-- 1 root root 218G May 25 06:28 ./test.iso
-rw-r--r-- 1 root root 3.8G May 25 04:03 ./debian-iso-dvd/debian-8.0.0-amd64-DVD-1.iso
-rw-r--r-- 1 root root 4.4G May 25 04:12 ./debian-iso-dvd/debian-8.0.0-amd64-DVD-2.iso
-rw-r--r-- 1 root root 4.4G May 25 04:20 ./debian-iso-dvd/debian-8.0.0-amd64-DVD-3.iso

and all the md5sums are correct.

mlsorensen pushed a commit to mlsorensen/vmfs-tools that referenced this issue May 26, 2015
    glandium#12

This provides read support for files > 256G, due to vSphere 5
adding double indirect block pointers. It uses a double indirect
lookup if the file has a blocksize of 1M and is over the VMFS size
threshold for using double indirect blocks. Perhaps there's a cleaner
way of determining the use of double indirect from the inode.

We may also want to implement a block pointer cache like VMware introduced
with this feature, however given the use cases of this software it may
not be necessary.
@mlsorensen

Well, that at least means nothing was broken. You wouldn't hit my patch until the 256G mark. I haven't tested writing at all.

I found a bug in my code at 1T: I needed a modulus where the upper pointers are traversed. See the update in my repo.
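For anyone looking at that change, the arithmetic is the usual multi-level index split. The entry count below is a placeholder rather than the real VMFS constant; it's only there to show where a missing modulus bites:

#include <stdint.h>

/* Sketch of the index math for a two-level (double indirect) lookup.
 * ENTRIES_PER_PB is a placeholder, not the real VMFS constant. */
#define ENTRIES_PER_PB 1024u

/* n: zero-based file block index within the file.
 * Splits n into inode slot, upper pointer-block entry and leaf entry. */
static void split_index(uint32_t n, uint32_t *inode_slot,
                        uint32_t *upper, uint32_t *leaf)
{
    *leaf       = n % ENTRIES_PER_PB;                      /* entry in the leaf pointer block  */
    *upper      = (n / ENTRIES_PER_PB) % ENTRIES_PER_PB;   /* entry in the upper pointer block */
    *inode_slot = n / (ENTRIES_PER_PB * ENTRIES_PER_PB);   /* which upper block the inode uses */
}
/* Dropping the "% ENTRIES_PER_PB" on the upper level still works while
 * n / ENTRIES_PER_PB < ENTRIES_PER_PB, i.e. for roughly the first 1T of data
 * with these placeholder numbers, and silently indexes past the block after
 * that, which is consistent with the breakage at 1T described above. */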

@danje

danje commented May 27, 2015

I've done a bit of testing this morning. I mounted a 1TB VMFS volume from a SAN snapshot, then mounted a 720GB VMDK file (stored on that volume) containing an NTFS filesystem. I was able to browse the contents and successfully open files. :-) Mounting a VMDK over 256GB failed on the 0.2.5 release but your pointer block fix seems to have done the trick! Happy days.

mlsorensen pushed a commit to mlsorensen/vmfs-tools that referenced this issue May 27, 2015
    glandium#12

This provides read support for files > 256G, due to vSphere 5
adding double indirect block pointers. It uses a double indirect
lookup if the file has a blocksize of 1M and is over the VMFS size
threshold for using double indirect blocks. Perhaps there's a cleaner
way of determining the use of double indirect from the inode.

We may also want to implement a block pointer cache like VMware introduced
with this feature, however given the use cases of this software it may
not be necessary.
@carlosgs83

The fix provided by @mlsorensen should be included in this repository because official distributions (like Ubuntu) are still using this version. By the way, I couldn't manage to compile the fix on Ubuntu, but it worked perfectly on CentOS.

Big thanks to you both.

@knackko

knackko commented Nov 14, 2017

@carlosgs83: which CentOS version did you manage to compile it on? Can you share your binaries?

@carlosgs83

@knackko I only used this to rescue a broken ESXi machine, and I did that successfully some time ago. No special version of CentOS was needed; I remember I managed it with a live DVD.

@petertuharsky

Hello,
I'm trying to copy a 1TB file from VMFS and I'm getting a read error. Has the patch still not been merged into this branch?
Debian 12, vmfs-tools v0.2.5

@ThomasChr

This issue was mentioned in this interesting article about recovering an encrypted file system: https://medium.com/@DCSO_CyTec/unransomware-from-zero-to-full-recovery-in-a-blink-8a47dd031df3
