
Should we regulate the internet?


Comments

  • Registered Users Posts: 2,033 ✭✭✭MegamanBoo


    Forcing people to leave a papertrail?

    Not to worry, I've been working with a crack team of security and encryption specialists. Together we've created a payment method that's 100% untraceable.




  • Registered Users Posts: 2,016 ✭✭✭eightieschewbaccy


    You seem to just pick up on one point rather than the broader point, which is that you have a totalitarian solution that would effectively roll back decades of technological progress and limit everyone's freedoms. On top of that, it would block plenty of legit content.



  • Registered Users Posts: 7,301 ✭✭✭jmcc



    While some of the technological problems might be solved, the problems posed by governments, civil society groups, NGOs, power-hungry politicians and clueless do-gooders would make implementing such a system, even on a national level, very difficult. Getting intergovernmental cooperation on something like this would be one of the hardest problems to solve due to the multiple jurisdictions involved. There is a lot of discussion about these problems among governments, law enforcement and the people who run the Internet. There's a learning curve because people get replaced or move jobs, and the replacement then has to be brought up to speed.

    Regards...jmcc

    Post edited by Boards.ie: Paul on


  • Registered Users Posts: 1,620 ✭✭✭Economics101


    This goes back a long way: "Quis custodiet ipsos custodes?" Juvenal.



  • Registered Users Posts: 2,033 ✭✭✭MegamanBoo


    I wonder if those pushing the 'parental responsibility' argument are actually parents, or if it's just a convenient way to fob the problem off.

    Being in control of your own child's internet usage very much doesn't protect them from harm done by others. I shared an article earlier on the rise in child-on-child abuse linked to children accessing violent pornography. Similarly, it wouldn't protect them from being the victim of online bullying, which studies are showing is significantly more harmful than traditional bullying. I'll see if I can find the links again, but there does seem to be evidence showing that teenagers suffering from mental health problems now face worse outcomes in general because of what they're exposed to online.

    The 'parental responsibility' advocates seem to think it's OK for some children to be protected from harm online but not others. It seems it's OK to harm children whose parents aren't involved enough, can't afford, or aren't tech-savvy enough to implement home online security measures. That's if we accept these measures are effective (a good proportion here claim any type of control will be easily bypassed), and ignore the fact that kids will still be able to access what they want in school or when socialising.



  • Registered Users Posts: 2,033 ✭✭✭MegamanBoo


    It only blocks content from children. Adults are free to access whatever they wish.

    I think it's quite a practical solution. @[Deleted User] offered a really useful childnet solution, but how would you ensure that kids only used this? Providing it as an option for children to use wouldn't be very effective.

    Another measure often suggested for controlling the harm done online is to make tech legally responsible for what they provide, in the same way that traditional publishers and broadcasters are. I think such a solution would in practice be far more restrictive as companies would simply remove all content potentially legally actionable, to the point there would only be a sanitized, restricted internet for everyone.

    Post edited by Boards.ie: Paul on


  • Moderators, Politics Moderators Posts: 38,885 Mod ✭✭✭✭Seth Brundle


    You've said that under your proposal, social media giants such as Facebook, TikTok, etc. would be whitelisted.

    Would your proposals end the bullying that occurs on these platforms and how?

    Would it prevent inappropriate content from being shared and how?

    Post edited by Boards.ie: Paul on


  • Registered Users Posts: 34,434 ✭✭✭✭o1s1n
    Master of the Universe


    'Parental responsibility' transcends internet monitoring.

    You are saying 'but what about the kids with irresponsible parents who don't care, why are they being left behind?'

    Kids with parents like that have bigger issues than what they see on the internet. If we were to base all of our decisions on what is allowed because some people have bad parents, then we'd be banning things all over the place.

    If you have a child, you are responsible for making sure they know how to cross the road safely, not run with scissors, not play with knives, not talk to strangers and not look at bad stuff on the internet.

    Some people's parents are bad at doing the above. It doesn't mean we ban knives/roads/the internet.



  • Registered Users Posts: 20,929 ✭✭✭✭Ash.J.Williams


    Isn't the regulation idea only in the news because a referendum went the wrong way, and nothing to do with child welfare?



  • Registered Users Posts: 34,434 ✭✭✭✭o1s1n
    Master of the Universe


    No this is just MegamanBoo wanting to ban over 99% of the internet because 'kids' or something. 😁



  • Registered Users Posts: 2,033 ✭✭✭MegamanBoo


    If they were not properly moderated, they would be removed from the whitelist.

    In practice I think any social media giant would have to create a child friendly version to remain on the whitelist.

    Doing so would allow them to apply appropriate controls for children if commercially viable, without having to provide this same level of restriction for adults.

    To the best of my knowledge, AI tools are now pretty good at spotting harmful content. I believe the reason they're not fully deployed on social media sites is that they are not fully accurate: they will inadvertently block some harmless content, and they might block content which is appropriate for adults but not for children.



  • Registered Users Posts: 495 ✭✭Yeah Right


    Why didn't you respond to the 2nd and 3rd parts of my post? Why not answer those questions? Is it because you felt you had an answer to the first part so dived right in, but you've nothing in response to the rest of the post, so you're hoping nobody will notice you skipped over it? I see that Seth Brundle has asked you basically the same questions; are you going to ignore him too?

    You're also seriously misguided if you think that every one of the vulnerable people I mentioned in my previous post has access to cash. There are plenty of people in serious situations, domestically, living with controlling parents/spouses/family members who restrict everything that you and I take for granted. Telling them to just pay cash when every single penny they withdraw is scrutinised and pored over is, quite frankly, ridiculous. You seem to be in the ridicule business, though.



  • Registered Users Posts: 495 ✭✭Yeah Right


    I think such a solution would in practice be far more restrictive as companies would simply remove all content potentially legally actionable, to the point there would only be a sanitized, restricted internet for everyone.

    But that's what you're proposing? It's literally your solution, restrict and sanitise all online content, for everybody, except for those willing to be supervised in government sanctioned internet cafes. Every time you open your mouth you just contradict yourself time and time again. You don't sound like you've thought this through at all.

    Post edited by Boards.ie: Paul on


  • Moderators, Politics Moderators Posts: 38,885 Mod ✭✭✭✭Seth Brundle


    If they were not properly moderated, they would be removed from the whitelist.

    So who decides the moderation standards and the SLA on actioning items?

    How many strikes before a site is removed from the whitelist or is it immediate?

    How long would the likes of FB or Twitter have to moderate something put up by one of their users?

    What happens if that content is in a private page on a social media site - is it still against the whitelisting rules?

    Will the public have a right to sue if content selected for removal from the whitelist is not done in a prompt manner and who would they sue?

    In practice I think any social media giant would have to create a child friendly version to remain on the whitelist.

    Define child friendly? What age rating are you aiming for with your censorship standards?

    Would any gay content be allowed, given that many Eastern European countries would oppose it?

    Any sexual content allowed at all? What about online videos used for sex-ed or consent lessons in schools?

    Doing so would allow them to apply appropriate controls for children if commercially viable, without having to provide this same level of restriction for adults.

    But again this unrestricted content is only available in public places such as pubs and internet cafes.

    To the best of my knowledge AI tools are now pretty good at spotting harmful content,

    Well, you've already shown that you know sweet FA about technology so maybe your plan to censor the entire internet shouldn't be based on "To the best of my knowledge" 🙄

    I believe the reason they're not fully deployed on social media sites is that they are not fully accurate and will inadvertently block some harmless content, and because they might block content which is appropriate for adults but not for children.

    So are you now telling us that it doesn't work?


    You should really try and inform yourself to some simple basics and not let the mice in your head operate the controls!



  • Registered Users Posts: 2,033 ✭✭✭MegamanBoo


    I'm not sure what you mean by 'parental responsibility transcends internet monitoring'? You'll have to explain that one more clearly to me.

    After that, it's not just bad parents. Parents can't watch their children 24/7. How exactly do you propose they effectively ensure their children's safety online? Or keep them from being harmed by other children through online bullying or child-on-child abuse?

    Even if it were actually the case that we could conveniently label what we see today as the outcome of 'bad parents', it doesn't escape the fact that the consequences of 'bad parenting' are now far worse than before because of what children can freely access and do online.



  • Registered Users Posts: 2,033 ✭✭✭MegamanBoo


    I'm not suggesting people be supervised in an adult only internet space. I don't know where you got that from.

    I appreciate there are people in controlling and abusive situations, I just don't really see the scenario as very realistic where the only thing that can help them could possibly be something provided online that couldn't be delivered in an alternative manner.



  • Registered Users Posts: 539 ✭✭✭BaywatchHQ


    Around 20 years ago there was a parental lock on my family computer but I figured out how to turn it off. I remember looking at porn photos but there wasn't much video content for free like now. I think videos were behind a pay wall then. I also figured out the pin for the Sky parental lock and would look forward to 11pm on Friday when the Babestation girls would "Flash the gash".



  • Registered Users Posts: 2,016 ✭✭✭eightieschewbaccy


    Eh, the Internet is frequently used to host support resources for abuse victims. This applies to both children and adults. Literally every abuse support org is online. But sure, you know best...

    On top of that, an Internet cafe absolutely is a supervised setting. They're also a notorious location for swiping people's credentials.



  • Registered Users Posts: 2,033 ✭✭✭MegamanBoo


    You're kind of arguing against yourself there tbh.

    It looks to me like you're telling me social media can never be child friendly, yet you want kids to have unrestricted access to it.

    I think the nuance of my point about AI escaped you, and my apologies if that was due to my explanation.

    We all know that AI is prone to error. For a social media site for all audiences, it would be undesirable to block, remove and maybe even flag content solely because it's potentially inappropriate for children. That would be far less of an issue on a site developed for children.



  • Registered Users Posts: 23,671 ✭✭✭✭One eyed Jack



    What’s meant by that is simply that there’s more to being a parent than just monitoring their children’s internet activity; there are all sorts of other aspects to raising children. It’s not unusual that you’ve chosen to focus on just this one aspect specifically, even to the point where it has you wondering whether people who object to your ideas have children themselves! Your insistence on your idea, while admirable, is a bit like this at this point:


    For what it’s worth, I think you’ve taken an awful lot of flak on here unnecessarily; there’s no need for it. I don’t agree with your idea, and I don’t support your idea, but I think it’s great that you have a grand idea, and it’s silly to be nitpicking with you over the finer details that are simply beyond the scope and capacity of any one individual.

    It’s not that I don’t care about children other than my own being subjected to all sorts of abuse, violence and neglect, and I think it’s unfair of you to suggest that of other people who don’t share your ideas. That’s why, first and foremost, the most important factor in protecting children is those children’s own parents, as opposed to imagining an international EU body should be tasked with supplanting parental responsibility.



  • Registered Users Posts: 2,033 ✭✭✭MegamanBoo


    I don't know why they wouldn't still be available online. I was getting ahead of myself answering that, trying to think of some scenario where they couldn't be.

    As for the credential swiping you could bring your own device so I don't think it would be much of an issue.



  • Registered Users Posts: 2,033 ✭✭✭MegamanBoo


    When I criticise people for pushing the 'parental responsibility' argument, it's only to try and highlight the fact that there are very obvious limits to how far parental responsibility can go.

    Even with a guaranteed child friendly internet available, it would still be the responsible thing to educate children about risks online, and how to develop a healthy relationship with the internet.

    I'm not trying to supplant parental responsibility, just to make it feasible for children to be brought up in a responsible manner.



  • Registered Users Posts: 2,016 ✭✭✭eightieschewbaccy


    Using private credentials over any public Wi-Fi connection, even with a personal device, isn't recommended; they're still at risk of being swiped.


    Anyway, the reality is the average member of the public would not be happy with this kind of regression. It's a lazy but well-intentioned idea. The basic premise would make private communications unviable. It would also effectively decimate the tech sector in the EU.



  • Registered Users Posts: 2,033 ✭✭✭MegamanBoo


    I think credentials are considered safe on public Wi-Fi these days, unless the website you're visiting is using very outdated security.

    https://consumer.ftc.gov/articles/are-public-wi-fi-networks-safe-what-you-need-know

    There'd be no harm in using a VPN anyway.

    There's a growing acceptance that we can't continue to allow the internet to operate as it does. I don't think anyone wants the inconvenience of something like this, but I suspect that if we don't see any viable alternatives in the next few years, something akin to it might be accepted as a last resort.



  • Registered Users Posts: 2,016 ✭✭✭eightieschewbaccy


    Not true; unless you're particularly security conscious, it's incredibly risky. I say this as someone who works in tech. I'm guessing you don't use a VPN to secure all your traffic, just to pick an example.





  • Registered Users Posts: 7,301 ✭✭✭jmcc


    Perhaps one of the biggest reasons that the whitelist idea will meet a lot of pushback at a governmental level is that the Internet generates a lot of tracking data on people. Yasha Levine's "Surveillance Valley" is worth reading, as are Bruce Schneier's books ("Data and Goliath").

    Regards...jmcc

    Post edited by jmcc on


  • Moderators, Politics Moderators Posts: 38,885 Mod ✭✭✭✭Seth Brundle


    It looks to me like you're telling me social media can never be child friendly, yet you want kids to have unrestricted access to it.

    I didn't say I wanted kids to have unrestricted access to it. I asked you how you would tackle the issues, given that you previously said these sites could be whitelisted. If these sites are currently a problem (and we know they are), what exactly will you do to change them once you bring in your draconian censorship measures?

    I think the nuance of my point about AI escaped you, and my apologies if that was due to my explanation.

    We all know that AI is prone to error. For a social media site for all audiences, it would be undesirable to block, remove and maybe even flag content solely because it's potentially inappropriate for children. That would be far less of an issue on a site developed for children.

    You are proposing to use technology despite stating "AI tools are now pretty good at spotting harmful content" and following it up with a paragraph about how it kinda doesn't work (and there could have been loads more scenarios that you could have used).


    I also note how yet again you fail to respond to the other points made, presumably because for you they are an uncomfortable truth: that your proposal is unworkable and poorly thought out.



  • Registered Users Posts: 495 ✭✭Yeah Right


    The vast, vast, vast majority of supports, resources and contacts for vulnerable people are online. You may not see it as realistic, but that means SFA in the grand scheme of things. It's reality. Your refusal to accept that state-funded, government-run internet cafes for accessing the 'good stuff' will be monitored is extremely naive.

    Finally, your refusal to answer the two questions, after being called out repeatedly, speaks volumes. Before you claim you don't know what questions I'm on about, here they are again:


    I'm sure things would still be posted on social media that shouldn't be. If a site didn't react and take down the offensive content immediately, then it would be blocked.

    What's stopping this from happening now, as in today? What's stopping Facebook/Twitter etc. from taking down 'offensive' content immediately? Why do you need to blacklist 99% of the internet for this to happen?

    In practice, and because the EU is such a large market, I'd expect these social media sites would quickly produce a sanitized version for home audiences.

    Again, the EU is such a large market at the moment, why don't they already do this?



  • Registered Users Posts: 2,033 ✭✭✭MegamanBoo


    And those supports can stay online; possibly there are some that would be too upsetting for children, but contact details could still be listed.

    As for being spied on by the government... They could do that right now if they wanted, what makes this any different?

    I don't see any social media sites really moderating content now; I don't think it fits their business model. It would be much easier to do on a version for children.


