This is hardly surprising. We all knew this was coming. (Not that that makes it OK.)
What's interesting to me as a photographer is that they claim to have scraped photos from places like Facebook. I'm pretty sure that goes against Facebook's terms of service, and it's definitely a violation of the copyrights on my photos. (Or would be if I had a Facebook account.) Seems like this is ripe for lawsuits. Not sure what the copyright laws are like in Australia, but I've read that they're more strict than in the US in some ways, so I'd think someone would have a case if they wanted to pursue it.
It's less clear-cut than you might think - this gets into some nuanced and interesting legal territory. Here's an entirely hypothetical argument, but one based on what I've heard and read from people with deep knowledge of the subject:
Viewing images requires a temporary copy to be made on the user's machine. This is pretty much universally accepted because it's fundamental to how the internet works.
If a user can see an image on their screen, then clearly they can write a description of it, or perhaps take measurements of certain features. That isn't a copy at all; they're just writing notes for their own purposes.
In fact, why not automate the process? Rather than taking the measurements by hand, let's write a bit of software that looks at the pixels on screen for us. It's just doing the same thing the user already did, but with less manual work.
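Purely as a hypothetical sketch of how little work that is - assuming Python with the Pillow library, say - the "measurement" step might be nothing more than:

    # Hypothetical sketch: take the same sort of notes a person could make
    # by eye (size, rough dominant tones), just automatically.
    from PIL import Image

    def measure(src):
        # src can be a file path or a file-like object
        img = Image.open(src).convert("RGB")
        width, height = img.size
        small = img.resize((8, 8))  # shrink so the summary stays crude
        avg = [sum(px[c] for px in small.getdata()) // 64 for c in range(3)]
        return {"width": width, "height": height, "avg_rgb": avg}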
Clicking each of the links to open the image for the measurement software is getting tiresome. Let's just pull the links out of the page's HTML as a batch; maybe add a quick regex to make filtering easier as well.
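Again hypothetically, and assuming nothing beyond Python's standard library, the batch link-gathering is only a few lines:

    # Hypothetical sketch: collect image URLs out of a page's HTML.
    import re

    def image_urls(html):
        # Naive regex - the "quick filter" version; a real HTML parser
        # would be more robust.
        urls = re.findall(r'<img[^>]+src="([^"]+)"', html)
        return [u for u in urls if u.lower().endswith((".jpg", ".jpeg", ".png"))]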
In fact, now that the links can be pulled from a page by a script, and the image processing is already automated, it can go through any publicly available list of Facebook profile URLs and process all the images from each. The data's been made public by the users, after all.
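And the whole hypothetical pipeline - assuming the two little helpers sketched above plus something like the requests library, and waving away logins, rate limits, and the fact that Facebook serves very little to an unauthenticated request - is barely more code:

    # Hypothetical sketch: run the link-gathering and "measurement" steps
    # over a list of public profile URLs.
    import io
    import requests

    def process_profiles(profile_urls):
        results = {}
        for profile in profile_urls:
            html = requests.get(profile, timeout=10).text
            for img_url in image_urls(html):
                data = requests.get(img_url, timeout=10).content
                results.setdefault(profile, []).append(measure(io.BytesIO(data)))
        return results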
It's not an ironclad argument, but it's difficult to form a cogent objection to any of those steps that wouldn't also apply to a lot of unequivocally reasonable usage.
He can't be the only one doing it - maybe just the first to be stupid enough to publicise it. I think this horse bolted long ago, but we're only just realising it.