Some thoughts about responsibility, ethics, freedom of expression and children.
After the Yahoo/Flickr debacle, a lot of old Flickrites started a new quest for the Promised Land.
I'm pretty sure this is one of the reasons for Ipernity's growth, and I'm almost sure Team Ipernity knows it: they were in the right place at the right time.
So I'm likewise sure they had long, long discussions before facing this hot topic.
We have seen their friendly face until now, but they are also businessmen and they have to protect their business and money. So I will try to confront this difficult theme without judging the timing of their decision, even if I am a little disappointed that there are no new laws to abide by.
First of all: I'm grateful for the chance to discuss the implementation, but if I were a paying member I'd be a little upset by this change of rules after I had handed over my money; so I can understand that paying members may take a more radical position.
I'm aware that every community has to have its laws (and its punishments for transgressors: laws are ineffective without the certainty of punishment, as we know very well in Italy...). As an international community we are subject to international laws, and as a French enterprise Ipernity is subject to French and EU laws. So for me it is clear that every transgression of those laws can be punished, and that each of us will decide how much risk we want to take in fighting stupid laws (yes: there are stupid laws, IMHO).
That said, I want to examine the usefulness of a filtering system.
A filtering system, IMHO, is effective if it is useful, if it can't be misused, and if it can be applied in a timely manner.
Let me test the Team's proposal against those prerequisites.
Is it useful? It can be useful only if it helps us choose how to take part in community life. To that end, everyone has to accept and respect it, and from the first discussions I have read, it seems to me that this isn't the case.
Can it be misused? I don't know. I haven't understood who will enforce it. Judging from what happened to my peers, it can be misused, as we experienced on Flickr: shots and photographers were censored without any reason, and they had the chance to defend themselves only after being censored. I remember Rebekka and our Ipernity friends Christine Lebrasseur and Don Gato, to name a few examples.
If the judgement is made by the Team, I'm sorry to say I suspect we will stumble on the third condition, the possibility of applying it in a timely manner: can the Team review all the streams in a timely manner? Nope! Can the Team review all the flagged streams or shots in a timely manner? Perhaps, but this raises another question: how effective can a self-disciplined rating be?
I can understand the position of all the parents worried about what their children might see, but I feel that self-imposed filtering can't help: in my experience, the great majority of those who upload questionable content won't bother to rate it correctly.
How can I trust self-imposed filtering? How can I worry about the content of my Ipernity network, when I run into questionable content every day in every Google search?
Is it a real problem if my child sees a thumbnail of a "bad" shot, or is the problem that I'm embarrassed to explain to my child that life can also be dangerous, and that not all people share her parents' ideas and ethics?
In my opinion we, as parents, already have all the features we need to protect our children. But do we have all the "features" to protect ourselves? I mean: is it a problem of Ipernity features, or is it a problem of never having been taught how to be real parents?
My personal reply is that no filtering system can help us protect our children: only our ability to talk to them and to educate them can.
So, if not to protect children, what other reason could there be for a filtering system, a self-disciplined one (where "self" can be me or my community)? I don't know. It could, perhaps, be a way to choose content, but in my opinion it can't be more effective than tagging or search.
So the only reason I see for this filtering system is for the Team to comply with the law.
- I don't trust self disciplined rating
- I don't trust automatic decisions about content
- I don't think human revision of content can be a viable solution in the long run
Sorry, Team! This time I feel it's you who have to decide and take responsibility for your decision: abide by the law and risk losing (part of) the community (and of the business), or trust the community and risk being fined (and losing part of your business).