Yes, this is sad, but true.

It just happened to me while migrating between two 2TB PVs: one on an old and slow FC storage array, the other on a fast new one.

Steps (as usual):

# sdl1 is the new PV (let's create it)
pvcreate /dev/sdl1 
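# optionally, verify the new PV shows up
pvs /dev/sdl1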

# extend the VG with this new PV
vgextend somevg /dev/sdl1

# now move off from the old one
pvmove /dev/sdd1 # with -n one can tell which LV should be moved (see the sketch after this listing)

# the old PV can be removed with
vgreduce somevg /dev/sdd1
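
# once the old PV is out of the VG, its LVM label can optionally be wiped as well
pvremove /dev/sdd1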

# and this is a useful command to find out which LV is on which PV
lvs --segments -o +pe_ranges
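
A few variations can be handy here (just a sketch; somelv below is a placeholder LV name, not something from my setup):

# move only the extents belonging to a single LV
pvmove -n somelv /dev/sdd1 /dev/sdl1

# report progress every 10 seconds
pvmove -i 10 /dev/sdd1

# abort a running pvmove if something looks wrong
pvmove --abort

# and see how much of each PV is actually in use
pvs -o +pv_used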

And that's it! At least that is how it should work, and it had worked for me before, but not this time.

Instead, this is what I got:

I/O error in filesystem ("dm-1") meta-data dev dm-1 block 0xe0e280       ("xfs_trans_read_buf") error 11 buf count 4096
xfs_force_shutdown(dm-2,0x1) called from line 395 of file /build/buildd/linux-lts-backport-natty-2.6.38/fs/xfs/xfs_trans_buf.c.  Return address = 0xffffffffa0111943
xfs_imap_to_bp: xfs_trans_read_buf()returned an error 5 on dm-2.  Returning error.
Filesystem dm-2: I/O Error Detected.  Shutting down filesystem: dm-2
Please umount the filesystem, and rectify the problem(s)
Filesystem dm-2: xfs_log_force: error 5 returned.
xfs_force_shutdown(dm-2,0x1) called from line 1111 of file /build/buildd/linux-lts-backport-natty-2.6.38/fs/xfs/linux-2.6/xfs_buf.c.  Return address = 0xffffffffa011cc03

(I left out the repeated messages.)

I had to shut down the application (which was a pain for the users) and let pvmove continue its work. Here is the explanation:

[dm-devel] [BUG] pvmove corrupting XFS filesystems (was Re: [BUG] Internal error xfs_dir2_data_reada_verify)

Fortunately, after it finished and I rebooted the system, it came up OK. 7 hours of downtime. At least I had started it late.
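
For completeness: in my case the reboot was enough, but had the filesystem not come back cleanly, a typical XFS recovery would have looked roughly like this (the mount point and LV path below are just placeholders):

# unmount the shut-down filesystem
umount /mnt/somefs

# repair it (if xfs_repair complains about a dirty log, mounting and
# unmounting it once usually replays the log)
xfs_repair /dev/somevg/somelv

# and mount it again
mount /dev/somevg/somelv /mnt/somefs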