
The Center for AI Safety proposes these measures to avoid “present and future harms”

By Adam Straker · 06/06/2023 · 4 Mins Read

The letters and manifestos in which Silicon Valley figures position themselves on the dangers of artificial intelligence, and on the measures to adopt around it, are a fascinating subject. As soon as one group publishes a letter, another group of CEOs and experts comes together to pick it apart, waits a few weeks… and then puts out its own public statement… not so different from the previous one.

A week ago, without going any further, the non-profit Center for AI Safety published its “AI Risk Statement”, only 22 words long in the original English:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Among the signatories of such a brief message were the CEOs of several of the leading companies in AI, including Sam Altman himself, CEO of OpenAI, whom many considered the target of the criticism in the letter (similar in purpose, although far more extensive) published weeks earlier by Musk.

But the entity behind it (which defines its mission as “reducing societal-scale risks associated with AI”) seems to have decided that conciseness need not conflict with digging a little deeper into its proposals.

Thus, it has just presented a new, more extensive document of ‘existing policy proposals aimed at present and future harms’: “The goal of this one-page document is to outline three proposals that we believe advance AI safety. This list is not exhaustive.”

There is, however, one big difference with respect to ‘Musk’s letter’: it makes no mention of the need for moratoriums on the development of advanced AIs.


I. Legal liability for damage caused by AI

The first theme addressed in the text is the need to establish stronger legal accountability frameworks for harm caused by AI systems. The AI Now Institute is quoted as arguing that allowing general-purpose AI (GPAI) developers to evade liability through standard legal exemptions would be the wrong approach…

…because it would create a dangerous loophole favoring large, well-resourced companies, shifting all the responsibility onto ‘downstream actors’ who lack the resources, access and capacity to mitigate every risk.

II. Increased regulatory scrutiny

The second prominent theme is the need for greater regulatory scrutiny of the development of AI systems, scrutiny that spans the entire product life cycle rather than just the application layer: the companies that develop these models should be accountable for the data and design decisions they make.

According to the document, transparency requirements and regulations targeting training data can help combat algorithmic bias and prevent companies from profiting from copyrighted material without compensating its creators.


III. Human supervision of automated systems

The third theme highlighted in the paper is the importance of human oversight in the deployment of high-risk AI systems. Human oversight can help mitigate concerns about both bias and the spread of misinformation, and it enables the detection (and timely deactivation) of “dangerous” AI systems.

Express reference is made to the European Union’s regulatory proposal, positively assessing the emphasis it places on the need for humans to be able to intervene and override a decision or recommendation that could cause harm.
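
To make that ‘intervene and override’ requirement concrete, here is a minimal, hypothetical sketch of a human-in-the-loop gate. It is not taken from any of the documents discussed here; the Decision class, the risk_score field and the 0.5 threshold are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Decision:
    """A recommendation produced by an automated system."""
    action: str
    risk_score: float  # estimated risk, from 0.0 (harmless) to 1.0 (potentially harmful)

def review_decision(decision: Decision, risk_threshold: float = 0.5) -> str:
    """Apply low-risk decisions automatically; escalate the rest to a human
    who can confirm the recommendation or override it entirely."""
    if decision.risk_score < risk_threshold:
        return decision.action  # low risk: no human intervention required
    # High risk: nothing is applied until a human approves or overrides it.
    answer = input(f"Approve '{decision.action}' (risk {decision.risk_score:.2f})? [y/N] ")
    return decision.action if answer.strip().lower() == "y" else "overridden_by_human"

if __name__ == "__main__":
    # Example: a high-risk recommendation that must be reviewed before taking effect.
    print(review_decision(Decision(action="deny_loan", risk_score=0.8)))

The point of the sketch is simply that the automated system never gets the last word on a high-risk decision, which is the behaviour the EU proposal would require.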

In summary…

Yet another attempt by Altman & Co. to get ahead of the game and ‘regulate before being regulated’, telling lawmakers how best to tie the industry’s hands. We already discussed, in our coverage of Altman’s appearance before the US Senate, the problems with this kind of regulation by dictation. That said, ‘the devil is in the details’, and for now the brevity of the document does not allow us to appreciate them.

