Why is duplication of links possible on Tildes?
AFAIK, reddit does not allow the same link to be submitted twice. On Tildes, you can repost the same link. Is this a deliberate choice, or has the discussion about it just been postponed?
IMO, after a certain time period (e.g. 1 month) the same link should be allowed again. That way, users who missed the link the first time, or a new discovery within the link, can reach the people who are interested.
I also really like reddit's feature that shows other contexts for the same link.
I have always found the complaints about reposts (especially reposts from months ago) somewhat confusing: not everyone sees the same content, since things drift off the front page. With something more permanent, like a wiki or a static reference page, it makes sense not to duplicate the same content at multiple points in the same document. But on a link aggregator like reddit, if nobody sees the first post, or if only half of the userbase or community sees it before it slides beyond the average user's check range, then if it can't be reposted it's gone and those users will never see it.
Maybe it's a demographic thing: the people who are actually bothered by reposts may be those who spend a great deal more time on the link aggregator site than others. I know there's an issue with people who repost content for karma in order to gain celebrity status or whatever (e.g. gallowboob), and with spammers reposting content to accrue karma and evade automated spam-deterrent systems that check for low karma. But these people are cheaters.
So this brings us to the question of whether we should implement policies and mechanisms that make normal users' interactions with the site more annoying in order to handle the misconduct of cheaters.
I'm not sure what the solution is, but just because reddit does it doesn't mean that it's the right policy.
The solution here on tildes is simple - tag the submission as a 'repost', and people who don't like reposts can eventually filter out what's been tagged that way and never see it.
The site itself will do repost detection, though - probably something like reddit, where it shows the old links so the user can decide whether it should be posted again (yesterday vs. a year ago). I expect the user can go ahead and post it anyway, but they will know it's a repost, and the site could even apply the repost tag automatically.
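The detection-and-tagging flow described above could be sketched roughly like this. Everything here is illustrative, not actual Tildes code: the store of prior submissions, the function names, and the one-year window are all assumptions made up for the example.

```python
from datetime import datetime, timedelta

# Hypothetical window after which a repost no longer gets auto-tagged.
REPOST_TAG_WINDOW = timedelta(days=365)

def find_prior_posts(url, submissions):
    """Return earlier submission dates of the same URL, newest first.

    `submissions` is a hypothetical mapping of url -> list of datetimes.
    """
    return sorted(submissions.get(url, []), reverse=True)

def suggested_tags(url, submissions, now=None):
    """Auto-suggest a 'repost' tag if the link was posted within the window.

    The submitter could still post; they would just see the prior posts
    and get the tag applied automatically.
    """
    now = now or datetime.utcnow()
    prior = find_prior_posts(url, submissions)
    if prior and now - prior[0] < REPOST_TAG_WINDOW:
        return ["repost"]
    return []
```

With this shape, the submission form could show `find_prior_posts(...)` to the user ("posted yesterday" vs. "posted a year ago") and pre-fill the tags from `suggested_tags(...)`.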
I was thinking about this as well just yesterday. The website should automatically tag a post as "repost" if the content has already been posted in the last <insert time span to be decided>.
Also, maybe discourage it, or let the poster know that the same link was already posted in a given group X days before.
Haha, what did you actually write here? Something like
`<insert time span to be decided>`
? I think my markdown parser/filter is doing some very strange things with things it thinks are HTML tags.
Yep, that's exactly what happened :D
Short answer: because this is an alpha and simply hasn't been implemented yet. ;)
Long answer: We plan to implement it through various means. First, by relying on canonical link element comparison. Second, by doing custom comparisons for popular sites that do not support canonical links.
e.g. YouTube, so youtube.com/v/111111 gets recognized as the same submission as youtube.com/watch?v=111111 and youtube.com/watch?v=111111&ref=website.com, etc.
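A custom comparison for YouTube like the one described could look something like the sketch below. This is a minimal illustration, not the actual planned implementation; it only covers the URL shapes mentioned above plus youtu.be short links, and a real version would combine it with the canonical-link check.

```python
from urllib.parse import urlparse, parse_qs

def youtube_video_id(url):
    """Extract a comparable video id from common YouTube URL shapes.

    Handles /watch?v=ID, /v/ID, and youtu.be/ID; extra query
    parameters (e.g. ref=...) are ignored. Returns None if the URL
    doesn't look like a YouTube video link.
    """
    parsed = urlparse(url)
    host = parsed.netloc.lower().removeprefix("www.")
    if host == "youtu.be":
        return parsed.path.lstrip("/") or None
    if host == "youtube.com":
        if parsed.path == "/watch":
            ids = parse_qs(parsed.query).get("v")
            return ids[0] if ids else None
        if parsed.path.startswith("/v/"):
            return parsed.path[len("/v/"):] or None
    return None
```

Two submissions would then be treated as duplicates when `youtube_video_id(a) == youtube_video_id(b)` and neither is None.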
However, I don't know if the plan is to ever set a global "repost rate" restriction, although we have talked about perhaps allowing trusted users to set their particular group's "repost rate".
That is also coming to ~ eventually, although we have been discussing other potential options as well.
Small note: YouTube is a bad example here, as (the last time I checked) there was no automated way to go from an old-style channel URL, e.g.:
https://www.youtube.com/user/arghdos
to a new-style channel URL, e.g.:
https://www.youtube.com/channel/UC9TOJlW5ZLaiWdMjAUoTpqQ
It's easy enough to compare https://www.youtube.com/user/channelname to
<span id="channel-title">channelname</span>
from the /channel/ID#### link... we will just need to do some scraping, is all.
True, I've always gone through their API though (which does not do that) :|
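The scrape-and-compare idea above could be sketched as follows. Note the assumptions: the `channel-title` span id comes from the comment above, but YouTube's real markup changes often and may no longer match; `same_channel` is a hypothetical helper, and a real version would fetch the page over HTTP first.

```python
from html.parser import HTMLParser

class ChannelTitleParser(HTMLParser):
    """Grab the text inside <span id="channel-title">...</span>."""

    def __init__(self):
        super().__init__()
        self._in_title = False
        self.title = None

    def handle_starttag(self, tag, attrs):
        if tag == "span" and ("id", "channel-title") in attrs:
            self._in_title = True

    def handle_data(self, data):
        # Capture only the first run of text inside the title span.
        if self._in_title and self.title is None:
            self.title = data.strip()

    def handle_endtag(self, tag):
        if tag == "span":
            self._in_title = False

def same_channel(user_url, channel_html):
    """Compare the name in an old-style /user/<name> URL against the
    channel title scraped from a new-style /channel/ID page."""
    name = user_url.rstrip("/").rsplit("/", 1)[-1]
    parser = ChannelTitleParser()
    parser.feed(channel_html)
    return parser.title is not None and parser.title.lower() == name.lower()
```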
Hah. That does not surprise me in the least when it comes to YouTube. But yeah, @deimos has talked in the past about using scrapers to get metadata we need, since not every site has an API, and even those that do tend to be somewhat rudimentary and limited. The plan is also to use external databases like thetvdb's API to assist us in that regard.
They tend to have their (shitty) quirks as well, e.g., SoundCloud blacklisted (some) queries from OVH-based servers (which we use on l2t). Of course, they're about to go down the drain, but that's about what you can expect.
Bandcamp straight up shut down their API a few years ago when I was developing /u/centralscruuutinizer/. Spotify has had an issue for 3 years where you can't get the play count through the Web API (which we would use for filtering on l2t; we might eventually get around to using artist followers instead).
shrug