Discussion:
File copying in ML
Bruce Horrocks
2012-08-11 23:08:08 UTC
I have an SD card from a camera with about 300 photos on - just over a
gig in total - of which about 8 consecutive files/pictures in the middle
are corrupted. The camera can't read them nor can the Macbook so
probably a dodgy SD card.

That isn't the problem.

The problem is that selecting all and copying to a folder on the HD
fails at the first of the corrupt files. In best Windows fashion, the
copy just stops with some files copied and some not and no indication to
say where it got to and how to restart/recover.

Has MacOS always been like this? I thought SL (and maybe earlier)
continued on to copy the remaining files, only omitting the genuinely
corrupt ones.
--
Bruce Horrocks
Surrey
England
(bruce at scorecrow dot com)
Graham J
2012-08-12 07:41:28 UTC
Post by Bruce Horrocks
I have an SD card from a camera with about 300 photos on - just over a
gig in total - of which about 8 consecutive files/pictures in the middle
are corrupted. The camera can't read them nor can the Macbook so
probably a dodgy SD card.
That isn't the problem.
The problem is that selecting all and copying to a folder on the HD
fails at the first of the corrupt files. In best Windows fashion, the
copy just stops with some files copied and some not and no indication to
say where it got to and how to restart/recover.
Has MacOS always been like this? I thought SL (and maybe earlier)
continued on to copy the remaining files, only omitting the genuinely
corrupt ones.
But Windows now has Robocopy, which can be told how to deal with
failures and whether or not to proceed in the face of an error, and will
write a log file of what it has done.

Given that the Mac is built on Linux it ought to be possible to write
a script using the command line to copy everything that can be read ...
--
Graham J
Jaimie Vandenbergh
2012-08-12 08:46:14 UTC
Post by Graham J
Given that the Mac is built on Linux
Second cousins, at closest. Macs run a Unix system, sort of.

Cheers - Jaimie
--
You're only young once, but you can remain immature indefinitely.
Calum
2012-08-12 16:05:25 UTC
Post by Jaimie Vandenbergh
Post by Graham J
Given that the Mac is built on Linux
Second cousins, at closest. Macs run a Unix system, sort of.
10.8 is UNIX 03 certified, so there isn't really any "sort of" about it.
--
Xbox: GallusNumpty Steam: scottishwildcat
Jaimie Vandenbergh
2012-08-12 16:18:00 UTC
On Sun, 12 Aug 2012 17:05:25 +0100, Calum
Post by Calum
Post by Jaimie Vandenbergh
Post by Graham J
Given that the Mac is built on Linux
Second cousins, at closest. Macs run a Unix system, sort of.
10.8 is UNIX 03 certified, so there isn't really any "sort of" about it.
Depends how you think about it. Yes, it's certified and behaves like
various BSDs from a user's point of view; but it's different enough
from a developer's POV that it's fun to port to/from, and it's hardly
the same at all from a sysadmin/unix hacker's POV. It's not directly
based on the original UNIX code from Berkeley or the System V derived
AT&T tree. The kernel (XNU) is a strange hybrid of a Mach microkernel
with a bunch of BSD and FreeBSD code embedded to make it a medium
sized kernel. All quite peculiar.

Cheers - Jaimie
--
"Those are my principles. If you don't like them, I have others."
- Groucho Marx
Chris Ridd
2012-08-12 16:31:01 UTC
Post by Jaimie Vandenbergh
On Sun, 12 Aug 2012 17:05:25 +0100, Calum
Post by Calum
Post by Jaimie Vandenbergh
Post by Graham J
Given that the Mac is built on Linux
Second cousins, at closest. Macs run a Unix system, sort of.
10.8 is UNIX 03 certified, so there isn't really any "sort of" about it.
Depends how you think about it. Yes, it's certified and behaves like
various BSDs from a user's point of view; but it's different enough
from a developer's POV that it's fun to port to/from, and it's hardly
the same at all from a sysadmin/unix hacker's POV. It's not directly
That's all true, but doesn't change the fact it is still UNIX. You
still have to port between systems that are both called UNIX, if only
because the specs have lots of optional bits.
--
Chris
Jaimie Vandenbergh
2012-08-12 16:36:04 UTC
Post by Chris Ridd
Post by Jaimie Vandenbergh
On Sun, 12 Aug 2012 17:05:25 +0100, Calum
Post by Calum
Post by Jaimie Vandenbergh
Post by Graham J
Given that the Mac is built on Linux
Second cousins, at closest. Macs run a Unix system, sort of.
10.8 is UNIX 03 certified, so there isn't really any "sort of" about it.
Depends how you think about it. Yes, it's certified and behaves like
various BSDs from a user's point of view; but it's different enough
from a developer's POV that it's fun to port to/from, and it's hardly
the same at all from a sysadmin/unix hacker's POV. It's not directly
That's all true, but doesn't change the fact it is still UNIX. You
still have to port between systems that are both called UNIX, if only
because the specs have lots of optional bits.
To be fair, I did say it is a Unix system up top.

It's just a pretty funny one, odder than Solaris but less odd than
AIX. It's about the HP-UX level, I'd say.

Cheers - Jaimie
--
It is difficult to say what is impossible, for the dream of yesterday
is the hope of today and the reality of tomorrow. -- Robert Goddard
Ian McCall
2012-08-12 09:07:27 UTC
Post by Graham J
Post by Bruce Horrocks
I have an SD card from a camera with about 300 photos on - just over a
gig in total - of which about 8 consecutive files/pictures in the middle
are corrupted. The camera can't read them nor can the Macbook so
probably a dodgy SD card.
Given that the Mac is built on Linux it ought to be possible to
write a script using the command line to copy everything that can be
read ...
Yep. Are they all just in a single folder? If so, use the following in
Terminal:
find (drag the directory you want to copy from) -type f -exec cp {}
(drag the directory you want to copy to) \;

eg. suppose I want to copy to a folder called "From Card" in my
Pictures directory, and the card was called "Camera" with a folder
"My Pictures" on it…
find /Volumes/Camera/My\ Pictures -type f -exec cp {}
~/Pictures/From\ Card \;

(that last bit, \; you will have to type yourself).

One quick subtlety to notice - you hand find the directory itself rather
than a wildcard; find walks everything inside it on its own. Because cp
is run once per file, an unreadable file just produces an error message
for that file and find carries on with the next one, which is exactly
what you want here. The escaping of spaces "\ " is done automatically
when dragging the folder from the Finder, so no extra quoting is needed
- easiest to leave it exactly as the Finder pastes it.
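
If you also want a record of which files couldn't be copied - the bit
the Finder doesn't tell you - something along these lines should do it
(untested sketch, same example paths as above, and the log file name is
just an example):

find /Volumes/Camera/My\ Pictures -type f | while IFS= read -r f; do
  cp "$f" ~/Pictures/From\ Card/ || echo "failed: $f" >> ~/Desktop/copy-failures.txt
done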


Cheers,
Ian
--
Check out Proto the album: <http://studioicm.com/proto/>
Bruce Horrocks
2012-08-12 18:13:41 UTC
eg. suppose I want to copy to a folder called "From Card" in my Pictures
directory, and the card was called "Camera" with a folder "My Pictures" on it…
find /Volumes/Camera/My\ Pictures -type f -exec cp {}
~/Pictures/From\ Card \;
Thanks Ian (and Jamie in another post) for the suggestions. Being a
camera card, the files were sequentially numbered so easy to track where
it had got to. I should have made clearer that I was moaning in the more
general sense: that I thought this kind of unhelpful OS (or any software
for that matter) behaviour had been consigned to the bin of history.

I can see I'm getting too pessimistic in my old age. ;-)
--
Bruce Horrocks
Surrey
England
(bruce at scorecrow dot com)
Jaimie Vandenbergh
2012-08-12 20:34:43 UTC
On Sun, 12 Aug 2012 19:13:41 +0100, Bruce Horrocks
Post by Bruce Horrocks
eg. suppose I want to copy to a folder called "From Card" in my Pictures
directory, and the card was called "Camera" with a folder "My Pictures" on it…
find /Volumes/Camera/My\ Pictures -type f -exec cp {}
~/Pictures/From\ Card \;
Thanks Ian (and Jamie in another post) for the suggestions. Being a
camera card, the files were sequentially numbered so easy to track where
it had got to. I should have made clearer that I was moaning in the more
general sense: that I thought this kind of unhelpful OS (or any software
for that matter) behaviour had been consigned to the bin of history.
I can see I'm getting too pessimistic in my old age. ;-)
I'm less sure about what I said earlier, I'm thinking there was an
error box which had a "continue" or perhaps "skip" option in an
earlier OSX release... is that what you were thinking of?

I don't have filesystem/disk problems often enough to see error panels
much these days!

Cheers - Jaimie
--
"If I'd been the Green Goblin, I'd have got a big bath and lured
Spiderman into it, and being a spider he wouldn't have been able
to climb out. Muahahaha." -- Paul Clark, urs
Ian McCall
2012-08-12 20:51:14 UTC
On 2012-08-12 20:34:43 +0000, Jaimie Vandenbergh
Post by Jaimie Vandenbergh
I don't have filesystem/disk problems often enough to see error panels
much these days!
I'm having bucket loads of them at the moment. I am going through the
entirely unenviable process of copying everything off my Drobo,
reformatting, then copying things back on again (damn both Time Machine
for having different local and network attached formats and also a rare
black mark from me to Drobo for not supporting non-destructive
repartitioning). I'm copying 2.8Tb around - it's the reason I've been
at the computer and posting a lot at the moment, because I'm overseeing
this particular bundle of joy.

I've had device hassles - see the 'nuke a badly behaved disk' thread.
I've had lots and lots of USB oddness from both the iMac and the
(powered) Kensington hub I use - I think I'm simply overloading it
somehow by asking it to do its job and shift a sustained 500Gb-a-shot
copy through it. Many, many failures of disk enclosures, I've had one
500Gb disk conk out and die in the middle of it, I've had Disk Utility
bugs where it never understood it had successfully deleted a partition
and needed to be force-quit…

You'd think the job would be simple, but it really really hasn't been.
The one upside is actually the Drobo - the lack of dynamic partitioning
has been annoying, but the device itself has performed without a single
hiccup all the way through this. I wonder if I would be saying the same
had I hooked it up via USB? I don't really trust the iMac and USB disks
anymore.

Flaming Time Machine and its daft network'isms. The whole reason for
this is that I had a 1Tb partition for Time Machine - fine when I had a
500Gb drive, useless with a 1Tb drive. I'm now going to have just a single
partition and use the trick to force sparsebundle use even when
attached locally. There are disadvantages to that approach, but the
advantage of backup portability far, far outweighs them.


Cheers,
Ian
--
Check out Proto the album: <http://studioicm.com/proto/>
Jaimie Vandenbergh
2012-08-12 22:04:45 UTC
Post by Ian McCall
On 2012-08-12 20:34:43 +0000, Jaimie Vandenbergh
Post by Jaimie Vandenbergh
I don't have filesystem/disk problems often enough to see error panels
much these days!
I'm having bucket loads of them at the moment. I am going through the
entirely unenviable process of copying everything off my Drobo,
reformatting, then copying things back on again (damn both Time Machine
for having different local and network attached formats and also a rare
black mark from me to Drobo for not supporting non-destructive
repartitioning). I'm copying 2.8Tb around - it's the reason I've been
at the computer and posting a lot at the moment, because I'm overseeing
this particular bundle of joy.
This sort of data quantity (I NAS much the same amount) is why I'm
using FreeNAS with ZFS... it actively hunts out going-bad data and
tries to sort it, or at least tells you about it, and doesn't just
give up when it comes across some. Unfortunately not very resizeable
either, although it has easy ways of making one volume appear to be
split in variable ways, which is just as good.
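
(For the record, the checking is mostly ZFS's "scrub" - from the command
line it's roughly this, with "tank" just a placeholder pool name:
zpool scrub tank # read every block and verify/repair checksums
zpool status -v tank # see what it found
so you get told about rot rather than discovering it mid-copy.)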

Don't envy you though; last time I had to do a full clone it took
about four days (rsync). Straight after which there was an update to
FreeNAS that revealed ZFS remote cloning in the UI... I'll do that
next time.
Post by Ian McCall
I've had device hassles - see the 'nuke a badly behaved disk' thread.
I've had lots and lots of USB oddness from both the iMac and the
(powered) Kensington hub I use - I think I'm simply overloading it
somehow by asking it to do its job and shift a sustained 500Gb-a-shot
copy through it. Many, many failures of disk enclosures, I've had one
500Gb disk conk out and die in the middle of it, I've had Disk Utility
bugs where it never understood it had successfully deleted a partition
and needed to be force-quit…
You'd think the job would be simple, but it really really hasn't been.
The one upside is actually the Drobo - the lack of dynamic partitioning
has been annoying, but the device itself has performed without a single
hiccup all the way through this. I wonder if I would be saying the same
had I hooked it up via USB? I don't really trust the iMac and USB disks
anymore.
I never did, USB is the lowest common denominator and behaves like it.
It really isn't suitable for intensive continuous data transfer.
Post by Ian McCall
Flaming Time Machine and its daft network'isms. The whole reason for
this is that I had a 1Tb partition for Time Machine - fine when I had a
500Gb drive, useless with a 1Tb drive. I'm now going to have just a single
partition and use the trick to force sparsebundle use even when
attached locally. There are disadvantages to that approach, but the
advantage of backup portability far, far outweighs them.
That's more a daft notnetworkism, then! I much much prefer the
arsebundle version too, having all those files sat there naked and
unchecksummed on an HFS drive scares me.

Cheers - Jaimie
--
"Opportunity is missed by most people because it is dressed in overalls and
looks like work." -- Thomas A. Edison
Ian McCall
2012-08-12 23:33:56 UTC
Post by Jaimie Vandenbergh
Post by Ian McCall
On 2012-08-12 20:34:43 +0000, Jaimie Vandenbergh
Post by Jaimie Vandenbergh
I don't have filesystem/disk problems often enough to see error panels
much these days!
I'm having bucket loads of them at the moment. I am going through the
entirely unenviable process of copying everything off my Drobo,
reformatting, then copying things back on again (damn both Time Machine
for having different local and network attached formats and also a rare
black mark from me to Drobo for not supporting non-destructive
repartitioning). I'm copying 2.8Tb around - it's the reason I've been
at the computer and posting a lot at the moment, because I'm overseeing
this particular bundle of joy.
This sort of data quantity (I NAS much the same amount) is why I'm
using FreeNAS with ZFS... it actively hunts out going-bad data and
tries to sort it, or at least tells you about it, and doesn't just
give up when it comes across some. Unfortunately not very resizeable
either, although it has easy ways of making one volume appear to be
split in variable ways, which is just as good.
Yep - the problems I'm having aren't with the disks in the Drobo, they're
with the USB 2.0 single disks hanging off the machine that I've copied the
Drobo contents to. A 2Tb one I borrowed which took four days to copy
everything to, including restarting after multiple failures. An external
caddy that turned out to be junk. A 500Gb drive going click-of-death bad on
me during use...even now I have a USB device error that I can't fix for a
while as it will need a machine restart and I've got a ton of data copying
that won't finish for the next ten hours or more. I'm dreading copying the
2Tb disk contents over - bet it will take multiple attempts. Mind that this
is a powered enclosure directly connected to the Mac, and yet is still
flakey.

The split in multiple ways is interesting - could you elaborate?
Post by Jaimie Vandenbergh
Don't envy you though; last time I had to do a full clone it took
about four days (rsync). Straight after which there was an update to
FreeNAS that revealed ZFS remote cloning in the UI... I'll do that
next time.
Four days is what I'm estimating assuming it all goes well. Which it won't.
I am on tenterhooks during this too - all my data is sat on single disks
right now, no redundancy. I hate that, and cannot wait for it to be back on
the Drobo where it belongs.
Post by Jaimie Vandenbergh
Post by Ian McCall
Flaming Time Machine and its daft network'isms. The whole reason for
this is that I had a 1Tb partition for Time Machine - fine when I had a
500Gb drive, useless with a 1Tb drive. I'm now going to have just a single
partition and use the trick to force sparsebundle use even when
attached locally. There are disadvantages to that approach, but the
advantage of backup portability far, far outweighs them.
That's more a daft notnetworkism, then! I much much prefer the
arsebundle version too, having all those files sat there naked and
unchecksummed on an HFS drive scares me.
Yep, I would have thought it was time to phase out the local format now.
Quite apart from anything else, there must be duplicate code paths in there
that could be simplified for maintenance.


Cheers,
Ian
--
Check out Proto the album: <http://studioicm.com/proto/>
Jaimie Vandenbergh
2012-08-13 00:29:23 UTC
Post by Ian McCall
The split in multiple ways is interesting - could you elaborate?
Sure - note that this is how FreeNAS simplifies things in its GUI, and
with a slightly old version of ZFS at that; there's a lot more possible
directly on the command line and with a modern ZFS release.

So first you group 1+ disks into one or more volumes, in the usual
way. The volume is usable directly.

Then you can optionally subdivide a volume into 'datasets', which for
all intents and purposes act as if the volume is partitioned, except
with freely-floating size (optional quotas and/or min size) - until
you run out of disk space, obviously.

At the moment I've got my 6Tb Volume1 split into 'Data' and
'TimeMachine' datasets, with TimeMachine set to a max of 1Tb. But if
there was only .1 Tb in TM, then Data could contain up to 5.9Tb.

The parent volume can still hold data as well - things are arranged in
a simple path fashion, so from the command line with the ZFS
filesystem mounted under /mnt the above looks like

/mnt/Volume1/
/mnt/Volume1/Data/(more folders full of crap)
/mnt/Volume1/TimeMachine/(sparsebundles)

Each dataset or volume can have its own snapshots (for rollbacks or
history diving - I keep two weeks of daily snapshots on Data, none on
TM), and its own schedule of replication tasks to elsewhere (currently
rsynced to another NAS nightly), and various other parameters.
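
In command-line terms that layout is roughly this - going from memory,
so treat it as a sketch rather than gospel:
zfs create Volume1/Data
zfs create Volume1/TimeMachine
zfs set quota=1T Volume1/TimeMachine # cap TM at 1Tb, Data just floats
zfs snapshot Volume1/Data@2012-08-13 # snapshots are per-dataset
zfs list -t snapshot # list existing snapshots
The FreeNAS GUI is mostly just wrapping those up.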

Wikipeeeeee has a large and interesting article on ZFS if you'd like
to look further. Unfortunately the company set up by the guy who was
porting ZFS back in Snow Leopard days appears to have just been eaten,
so the only commercial ZFS ("Zevo") may not exist soon.

Cheers - Jaimie
--
"If you can't make fun of it, it's probably not worth taking seriously"
-- http://survivingtheworld.net/Lesson494.html
Ian McCall
2012-08-13 06:05:37 UTC
Post by Jaimie Vandenbergh
At the moment I've got my 6Tb Volume1 split into 'Data' and
'TimeMachine' datasets, with TimeMachine set to a max of 1Tb. But if
there was only .1 Tb in TM, then Data could contain up to 5.9Tb.
Interesting - can you dynamically resize those sets, e.g. suddenly decide
to make Time Machine 2Tb and do so non-destructively?

Cheers,
Ian
Chris Ridd
2012-08-13 06:47:08 UTC
Post by Ian McCall
Post by Jaimie Vandenbergh
At the moment I've got my 6Tb Volume1 split into 'Data' and
'TimeMachine' datasets, with TimeMachine set to a max of 1Tb. But if
there was only .1 Tb in TM, then Data could contain up to 5.9Tb.
Interesting - can you dynamically resize those sets, e.g. suddenly decide
to make Time Machine 2Tb and do so non-destructively?
Yes, you can change the size of zvols (ZFS "raw" volume datasets, the
things Jaimie is talking about). "zfs set volsize=... [...]"
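For instance, growing one to 2Tb (the zvol name is purely illustrative):
zfs set volsize=2T Volume1/tmvol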

Your next trick is persuading HFS+ to understand it is in a bigger
"disk". I'm pretty sure Disk Utility is able to do that easily - it got
this functionality when Boot Camp came into being.
--
Chris
Jaimie Vandenbergh
2012-08-13 11:28:11 UTC
Post by Chris Ridd
Post by Ian McCall
Post by Jaimie Vandenbergh
At the moment I've got my 6Tb Volume1 split into 'Data' and
'TimeMachine' datasets, with TimeMachine set to a max of 1Tb. But if
there was only .1 Tb in TM, then Data could contain up to 5.9Tb.
Interesting - can you dynamically resize those sets, e.g. suddenly decide
to make Time Machine 2Tb and do so non-destructively?
Yes, you can change the size of zvols (ZFS "raw" volume datasets, the
things Jaimie is talking about). "zfs set volsize=... [...]"
Aye.
Post by Chris Ridd
Your next trick is persuading HFS+ to understand it is in a bigger
"disk". I'm pretty sure Disk Utility is able to do that easily - it got
this functionality when Boot Camp came into being.
Do you mean the HFS+ inside the TM archive sparsebundles? As far as I
can tell, they get adapted by TM automagically to match the host
network share's size, which saves a bit of faffing about with DU.

The ZFS volumes/datasets themselves aren't formatted HFS.

Cheers - Jaimie
--
A problem shared is a problem halved, so is your
problem really yours or just half of someone else's?
Chris Ridd
2012-08-13 12:03:21 UTC
Post by Jaimie Vandenbergh
Post by Chris Ridd
Post by Ian McCall
Post by Jaimie Vandenbergh
At the moment I've got my 6Tb Volume1 split into 'Data' and
'TimeMachine' datasets, with TimeMachine set to a max of 1Tb. But if
there was only .1 Tb in TM, then Data could contain up to 5.9Tb.
Interesting - can you dynamically resize those sets, e.g. suddenly decide
to make Time Machine 2Tb and do so non-destructively?
Yes, you can change the size of zvols (ZFS "raw" volume datasets, the
things Jaimie is talking about). "zfs set volsize=... [...]"
Aye.
Post by Chris Ridd
Your next trick is persuading HFS+ to understand it is in a bigger
"disk". I'm pretty sure Disk Utility is able to do that easily - it got
this functionality when Boot Camp came into being.
Do you mean the HFS+ inside the TM archive sparsebundles? As far as I
can tell, they get adapted by TM automagically to match the host
network share's size, which saves a bit of faffing about with DU.
The ZFS volumes/datasets themselves aren't formatted HFS.
Oh right, it sounded like you were exporting the raw volume. But you
can't really do that without iSCSI on the Mac :-(
--
Chris
Jaimie Vandenbergh
2012-08-13 12:29:42 UTC
Post by Chris Ridd
Post by Jaimie Vandenbergh
Post by Chris Ridd
Your next trick is persuading HFS+ to understand it is in a bigger
"disk". I'm pretty sure Disk Utility is able to do that easily - it got
this functionality when Boot Camp came into being.
Do you mean the HFS+ inside the TM archive sparsebundles? As far as I
can tell, they get adapted by TM automagically to match the host
network share's size, which saves a bit of faffing about with DU.
The ZFS volumes/datasets themselves aren't formatted HFS.
Oh right, it sounded like you were exporting the raw volume. But you
can't really do that without iSCSI on the Mac :-(
Initiators aren't that hard to come by, but the point of the NAS round
here is to be available to the whole network so it's a lot easier to
just AFP (and SMB) share things.

Cheers - Jaimie
--
Tomorrow (noun) - A mystical land where 99% of all human productivity,
motivation and achievement is stored.
-- http://thedoghousediaries.com/3474
Chris Ridd
2012-08-13 13:34:48 UTC
Post by Jaimie Vandenbergh
Post by Chris Ridd
Post by Jaimie Vandenbergh
Post by Chris Ridd
Your next trick is persuading HFS+ to understand it is in a bigger
"disk". I'm pretty sure Disk Utility is able to do that easily - it got
this functionality when Boot Camp came into being.
Do you mean the HFS+ inside the TM archive sparsebundles? As far as I
can tell, they get adapted by TM automagically to match the host
network share's size, which saves a bit of faffing about with DU.
The ZFS volumes/datasets themselves aren't formatted HFS.
Oh right, it sounded like you were exporting the raw volume. But you
can't really do that without iSCSI on the Mac :-(
Initiators aren't that hard to come by, but the point of the NAS round
here is to be available to the whole network so it's a lot easier to
just AFP (and SMB) share things.
I'm only aware of two - the GlobalSAN one and the AttoTECH one - and
they're both commercial. Any free or open source ones I've missed?
--
Chris
Jaimie Vandenbergh
2012-08-13 16:10:38 UTC
Post by Chris Ridd
Post by Jaimie Vandenbergh
Post by Chris Ridd
Post by Jaimie Vandenbergh
Post by Chris Ridd
Your next trick is persuading HFS+ to understand it is in a bigger
"disk". I'm pretty sure Disk Utility is able to do that easily - it got
this functionality when Boot Camp came into being.
Do you mean the HFS+ inside the TM archive sparsebundles? As far as I
can tell, they get adapted by TM automagically to match the host
network share's size, which saves a bit of faffing about with DU.
The ZFS volumes/datasets themselves aren't formatted HFS.
Oh right, it sounded like you were exporting the raw volume. But you
can't really do that without iSCSI on the Mac :-(
Initiators aren't that hard to come by, but the point of the NAS round
here is to be available to the whole network so it's a lot easier to
just AFP (and SMB) share things.
I'm only aware of two - the GlobalSAN one and the AttoTECH one - and
they're both commercial. Any free or open source ones I've missed?
Gak! GlobalSAN's was free last time I used it. Grumble. If you can get
a v4, that presumably still doesn't need a license.

Cheers - Jaimie
--
The Daily Mail should be forced to print the words 'The Paper That
Supported Hitler' on its masthead, just so that there is something
that's true on the front page every day. -- Mark Thomas
Bruce Horrocks
2012-08-13 05:46:19 UTC
Post by Jaimie Vandenbergh
On Sun, 12 Aug 2012 19:13:41 +0100, Bruce Horrocks
Post by Bruce Horrocks
eg. suppose I want to copy to a folder called "From Card" in my Pictures
directory, and the card was called "Camera" with a folder "My Pictures" on it…
find /Volumes/Camera/My\ Pictures -type f -exec cp {}
~/Pictures/From\ Card \;
Thanks Ian (and Jamie in another post) for the suggestions. Being a
camera card, the files were sequentially numbered so easy to track where
it had got to. I should have made clearer that I was moaning in the more
general sense: that I thought this kind of unhelpful OS (or any software
for that matter) behaviour had been consigned to the bin of history.
I can see I'm getting too pessimistic in my old age. ;-)
I'm less sure about what I said earlier, I'm thinking there was an
error box which had a "continue" or perhaps "skip" option in an
earlier OSX release... is that what you were thinking of?
I don't have filesystem/disk problems often enough to see error panels
much these days!
The file copy starts and the standard Finder progress bar appears. Once
it hits the first corrupt file it stops and displays an "error -36"
dialog and that's it. No resume or continue options. I thought MacOS
dropped this kind of behaviour long ago and would continue on and at
least try the other files. Perhaps it used to work and things have gone
backwards with ML?
--
Bruce Horrocks
Surrey
England
(bruce at scorecrow dot com)
Jaimie Vandenbergh
2012-08-13 11:31:27 UTC
On Mon, 13 Aug 2012 06:46:19 +0100, Bruce Horrocks
Post by Bruce Horrocks
Post by Jaimie Vandenbergh
On Sun, 12 Aug 2012 19:13:41 +0100, Bruce Horrocks
Post by Bruce Horrocks
eg. suppose I want to copy to a folder called "From Card" in my Pictures
directory, and the card was called "Camera" with a folder "My Pictures" on it…
find /Volumes/Camera/My\ Pictures -type f -exec cp {}
~/Pictures/From\ Card \;
Thanks Ian (and Jamie in another post) for the suggestions. Being a
camera card, the files were sequentially numbered so easy to track where
it had got to. I should have made clearer that I was moaning in the more
general sense: that I thought this kind of unhelpful OS (or any software
for that matter) behaviour had been consigned to the bin of history.
I can see I'm getting too pessimistic in my old age. ;-)
I'm less sure about what I said earlier, I'm thinking there was an
error box which had a "continue" or perhaps "skip" option in an
earlier OSX release... is that what you were thinking of?
I don't have filesystem/disk problems often enough to see error panels
much these days!
The file copy starts and the standard Finder progress bar appears. Once
it hits the first corrupt file it stops and displays an "error -36"
dialog and that's it. No resume or continue options. I thought MacOS
dropped this kind of behaviour long ago and would continue on and at
least try the other files. Perhaps it used to work and things have gone
backwards with ML?
If anyone can identify when that worked, it's well worth logging as a
regression bug. Hitting up Google Images for "finder error" and
similar hasn't popped up anything with
resume/continue/skip/whatever...

Cheers - Jaimie
--
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet?
Chris Ridd
2012-08-12 09:57:08 UTC
Post by Graham J
Given that the Mac is built on Linux
Er, no it isn't.
--
Chris
Rowland McDonnell
2012-08-12 16:04:22 UTC
Graham J <***@invalid> wrote:

[snip]
Post by Graham J
Given that the Mac is built on Linux
Mac OS X descended from NeXTSTEP, which is older than Linux and
BSD-like.
Post by Graham J
it ought to be possible to write
a script using the command line to copy everything that can be read
You're right about that bit, though.

Rowland.
--
Remove the animal for email address: ***@dog.physics.org
Sorry - the spam got to me
http://www.mag-uk.org http://www.bmf.co.uk
UK biker? Join MAG and the BMF and stop the Eurocrats banning biking
Jaimie Vandenbergh
2012-08-12 08:55:16 UTC
On Sun, 12 Aug 2012 00:08:08 +0100, Bruce Horrocks
Post by Bruce Horrocks
I have an SD card from a camera with about 300 photos on - just over a
gig in total - of which about 8 consecutive files/pictures in the middle
are corrupted. The camera can't read them nor can the Macbook so
probably a dodgy SD card.
That isn't the problem.
The problem is that selecting all and copying to a folder on the HD
fails at the first of the corrupt files. In best Windows fashion, the
copy just stops with some files copied and some not and no indication to
say where it got to and how to restart/recover.
Has MacOS always been like this? I thought SL (and maybe earlier)
continued on to copy the remaining files, only omitting the genuinely
corrupt ones.
I'm fairly sure I had to do a binary search for the problematic files
in a similar situation back in Leopard... not 100% though, and I can't
think of a way to construct appropriate corruption to try it again
now.

Anyway, there are a few things you can try.

Diskimage - make a new image from the SD card, then copy the files out
from the image.

Finder - move instead of copy.

Terminal - create a destination folder on your desktop, then
to copy 'cp -v /Volumes/SDcard/folder/* ~/Desktop/folder/'
or to move, use mv instead of cp.
or to try rsync,
'rsync -va /Volumes/SDcard/folder/ ~/Desktop/folder/'
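
If rsync does hit the unreadable ones it should just complain about each
one and carry on with the rest; to keep a note of which files failed you
can send the complaints to a file, e.g. (same placeholder paths as
above, log name just an example)
'rsync -va /Volumes/SDcard/folder/ ~/Desktop/folder/ 2>~/Desktop/rsync-errors.txt'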

Cheers - Jaimie
--
"A committee is a cul-de-sac down which ideas are lured and then
quietly strangled." - Sir Barnett Cocks (1907-1989)
James Dore
2012-08-13 13:27:25 UTC
Post by Bruce Horrocks
I have an SD card from a camera with about 300 photos on - just over a
gig in total - of which about 8 consecutive files/pictures in the middle
are corrupted. The camera can't read them nor can the Macbook so
probably a dodgy SD card.
That isn't the problem.
The problem is that selecting all and copying to a folder on the HD
fails at the first of the corrupt files. In best Windows fashion, the
copy just stops with some files copied and some not and no indication to
say where it got to and how to restart/recover.
Has MacOS always been like this? I thought SL (and maybe earlier)
continued on to copy the remaining files, only omitting the genuinely
corrupt ones.
rsync.

Open a terminal and do man rsync for info.

It's like robocopy, only better.

J
--
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/