This sex robot can now refuse sex if she feels bored or disrespected

Posted on July 15th, 2018


The man who was fired by a machine

Posted on July 14th, 2018


MIT fed an AI data from Reddit, and now it only thinks about murder

Posted on July 12th, 2018


Google Is Training Machines to Predict When a Patient Will Die

Posted on July 11th, 2018


Disney Imagineering has created autonomous robot stunt doubles

Posted on July 9th, 2018

Human performers have developed impressive acrobatic techniques over thousands of years of practicing the gymnastic arts. At the same time, robots have started to become more mobile and autonomous, and can begin to imitate these stunts in dramatic and informative ways. This is a simple two-degree-of-freedom robot that uses a gravity-driven pendulum launch and produces a variety of somersaulting stunts.

The robot uses an IMU and a laser range-finder to estimate its state mid-flight and actuates to change its motion both on and off the pendulum.
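The physics behind this kind of mid-air control is conservation of angular momentum: once airborne, the robot cannot change its total angular momentum, but by tucking or extending it changes its moment of inertia and therefore its spin rate. A minimal sketch of that relationship (illustrative numbers and names, not Disney's code):

```python
# Illustrative sketch only: how a tumbling robot changes its rotation rate
# mid-flight by shifting its mass distribution. In free flight the angular
# momentum L = I * omega is conserved, so shrinking I speeds up the spin.

def spin_rate_after_tuck(omega_extended, inertia_extended, inertia_tucked):
    """Angular velocity (rad/s) after the robot pulls its mass inward."""
    angular_momentum = inertia_extended * omega_extended  # conserved in flight
    return angular_momentum / inertia_tucked

# Example: launched extended at 2.0 rad/s with I = 8.0 kg*m^2, then tucking
# to I = 2.0 kg*m^2 quadruples the spin rate.
print(spin_rate_after_tuck(2.0, 8.0, 2.0))  # 8.0
```

This is why a tucked somersault rotates faster than a layout, and why a robot that can sense its orientation (via the IMU) and retime its tuck can correct its landing angle in flight.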

For more than 50 years, Disneyland and its sister parks have been a showcase for increasingly advanced versions of Disney's "animatronic" characters. First pneumatic and hydraulic, and more recently fully digital, these characters create a sense of emotion and life inside attractions and rides, in shows and, increasingly, in interactive ways across the parks. The machines Imagineering produces are becoming more mobile and active in order to represent the physical nature of the characters they portray in the expanding Disney universe. And a recent addition to the pantheon could change how characters move across the parks, and how we think about mobile robots at large. Traditionally, animatronic figures cannot move from where they stand or sit, and are pre-built to exacting show specifications. The design and programming stages are closely linked, so that the hero figures are durable and efficient enough to run hundreds of times a day, every day, for years. That discipline has done a lot to increase how compelling what is, in the end, a very restricted robot can be.

But with the expanded universe of Disney properties featuring ever more active and epic characters each year, it makes sense that the company would want to explore ways of making the robots that represent those properties in the parks more believable and more dynamic. That is where the Stuntronics project comes in. Built out of a research experiment called Stickman, which we covered a few weeks ago, Stuntronics are autonomous, self-correcting aerial performers that make on-the-fly corrections to nail high-flying stunts every time. Basically, robotic stunt people, hence the name.

“What this is all about is the realization we came to after seeing where our characters are going on screen,” says Dohi, “whether they’re Star Wars characters, or Pixar characters, or Marvel characters or our own animated characters: they’re doing all these things that are really, extremely active. So that becomes the expectation our park guests have — our characters do all those things on screen — but when it comes to our attractions, what are our animatronic figures doing? We realized we have kind of a disconnect here.”

They came up with the notion of a stunt double for the ‘hero’ animatronic characters, one that could take their place in a show or scene to perform more aggressive maneuvers, much in the same way a double replaces a fragile and valuable actor in a dangerous scene. The Stuntronics robot carries onboard accelerometer and gyroscope arrays supported by laser range finding. In its current form it is humanoid, about the size and build of a performer that could easily be imagined in the costume of, say, one of The Incredibles, or somebody on the Marvel roster. The bot can be slung at the end of a wire to fly through the air, controlling its pose, rotation and center of mass not just to land aerial tricks correctly, but to perform them on target while holding heroic poses in midair.

One use of this could be mid-show in an attraction. For relatively static shots, hero animatronics like the Shaman, or new figures Imagineering is constantly working on, could provide nuanced performances of face and figure. Then comes a transition to a scene that requires dramatic, unfettered action and, boom, a Stuntronics double could fly across the space on its own, calculating trajectories and striking poses with its onboard hardware, hitting a target dead on every time.
Cue re-set for the next audience.

This focus on creating scenarios where animatronics feel more ‘real’ and dynamic is at work in other areas of Imagineering as well, with autonomous rolling robots and — some day — the holy grail of bipedal walking robots. But Stuntronics fills one specific gap in the repertoire of a standard animatronic figure: the ability to convince you it can be a being of action and dynamism.

“So often our robots are in the uncanny valley where you got a lot of function, but it still doesn’t look quite right. And I think here the opposite is true,” says Pope. “When you’re flying through the air, you can have a little bit of function and you can produce a lot of stuff that looks pretty good, because of this really neat physics opportunity — you’ve got these beautiful kinds of parabolas and sine waves that just kind of fall out of rotating and spinning through the air in ways that are hard for people to predict, but that look fantastic.”


Like many of the solutions Imagineering comes up with for its problems, Stuntronics started out as a research project with no specific application: essentially a metal brick with sensors and the ability to shift its center of mass to control its spin, so that it could hit a precise orientation at a precise height — to ‘stick the landing’ every time. From the initial BRICK, Disney moved on to Stickman, an articulated version of the system that could more aggressively control the orientation and rotation of the device.

“Morgan and I got together and said, maybe there’s something here, we’re not sure. But let’s poke at it in a bunch of different directions and see what comes out of it,” says Dohi.

But the Stickman didn’t stick around for long. “By the time I was presenting the BRICK at a conference, Tony [Dohi] had helped us create Stickman. Stickman is what’s cool. And I was down in Australia presenting Stickman, and I knew we were already working on the full Stuntronic back in R&D. It’s been so much fun. Every step along the way I think, this is blowing my mind. But they keep pushing… so it’s wonderful to have this challenge.”


You’ve got people who are empowered, by management and by the way the organization is arranged, to pull at the threads of a problem even when you’re not entirely sure what’s going to come of it. The largest companies on earth have comparable R&D departments in place — though those that make a habit of disconnecting them from a balance sheet, like Apple, are few and far between in my experience. Typically, R&D is tied so tightly to a profit/loss spreadsheet that it’s hard to sit with a problem long enough to see what comes of it. The value here is the ability to have enormously different specialties, like mathematics, physics, art and design, put ideas on the table, sift through them and say: hey, we have this storytelling problem on one hand and this research project on the other. If we drill down on this a bit more, would it serve the goal?

“We’re set up to do the risky stuff that you don’t know will succeed or not, because you don’t know if there’s going to be an immediate application of what you’re doing,” says Dohi. “But you just have a hunch that there may be something, and they give us a very long leash, and they let us explore the possibilities and the space around just an idea, which is quite a privilege. It’s one of the reasons I love this place.”


It is indeed a group of very smart people across a wide array of disciplines, connected by a central nervous system of leaders, like Jon Snoddy, the head of Walt Disney Imagineering R&D, who help connect the dots between the research side and the other areas of Imagineering that handle the parks, interactive projects or the digital division. There is an economy and discipline to the operation that permits exploration without wastefulness and curtails the pursuit of things not in service of the story. In my time exploring the workings of Imagineering, I have often found a considerable disconnect between how interesting the process is and how well the company conveys the cleverness of its solutions. The Disney Research white papers are endlessly fascinating to anyone interested in emerging technology, but the points of integration between the research and the practical applications in the parks often remain unexplored. They are, however, getting better at recognizing when they have something they believe is killer, and at thinking about better ways to communicate that to the world. Indeed, near the end of our conversation, Dohi says he has come up with a good sound bite, and I make him give me his best pitch.

“One of our goals of Stuntronics is to see if we can leap across the uncanny valley.”

Not bad.


A machine has figured out Rubik’s Cube all by itself

Posted on July 8th, 2018


New AI can guess whether you’re gay or straight from a photograph

Posted on June 12th, 2018

An algorithm deduced the sexuality of people on a dating site with up to 91% accuracy, raising tricky ethical questions

 

Artificial intelligence can accurately guess whether people are gay or straight based on photos of their faces, according to new research that suggests machines can have significantly better “gaydar” than humans.

The study from Stanford University – which found that a computer algorithm could correctly distinguish between gay and straight men 81% of the time, and 74% for women – has raised questions about the biological origins of sexual orientation, the ethics of facial-detection technology, and the potential for this kind of software to violate people’s privacy or be abused for anti-LGBT purposes.

The machine intelligence tested in the research, which was published in the Journal of Personality and Social Psychology and first reported in the Economist, was based on a sample of more than 35,000 facial images that men and women publicly posted on a US dating website. The researchers, Michal Kosinski and Yilun Wang, extracted features from the images using “deep neural networks”, meaning a sophisticated mathematical system that learns to analyze visuals based on a large dataset.

The research found that gay men and women tended to have “gender-atypical” features, expressions and “grooming styles”, essentially meaning gay men appeared more feminine and vice versa. The data also identified certain trends, including that gay men had narrower jaws, longer noses and larger foreheads than straight men, and that gay women had larger jaws and smaller foreheads compared to straight women.

Human judges performed much worse than the algorithm, accurately identifying orientation only 61% of the time for men and 54% for women. When the software reviewed five images per person, it was even more successful – 91% of the time with men and 83% with women. Broadly, that means “faces contain much more information about sexual orientation than can be perceived and interpreted by the human brain”, the authors wrote.
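The jump in accuracy from one image to five is what you would expect from pooling several noisy estimates of the same underlying label. The paper's exact aggregation method is not described here, so as a minimal illustrative sketch (not the study's code, and with invented scores), assume the classifier emits a probability per image and the per-person score is their mean:

```python
# Illustrative sketch: combining per-image classifier outputs into one
# per-person score by averaging. Averaging reduces the variance of the
# estimate, which is one simple reason multi-image accuracy can be higher.

def aggregate(probabilities):
    """Mean of per-image probability scores for one person."""
    return sum(probabilities) / len(probabilities)

# Five noisy per-image scores for the same (hypothetical) person;
# the mean is a more stable estimate than any single score.
scores = [0.72, 0.65, 0.81, 0.58, 0.74]
print(round(aggregate(scores), 2))  # 0.7
```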

The paper suggested that the findings provide “strong support” for the theory that sexual orientation stems from exposure to certain hormones before birth, meaning people are born gay and being queer is not a choice. The machine’s lower success rate for women also could support the notion that female sexual orientation is more fluid.

While the findings have clear limits when it comes to gender and sexuality – people of color were not included in the study, and there was no consideration of transgender or bisexual people – the implications for artificial intelligence (AI) are vast and alarming. With billions of facial images of people stored on social media sites and in government databases, the researchers suggested that public data could be used to detect people’s sexual orientation without their consent.

It’s easy to imagine spouses using the technology on partners they suspect are closeted, or teenagers using the algorithm on themselves or their peers. More frighteningly, governments that continue to prosecute LGBT people could hypothetically use the technology to out and target populations. That means building this kind of software and publicizing it is itself controversial given concerns that it could encourage harmful applications.

But the authors argued that the technology already exists, and its capabilities are important to expose so that governments and companies can proactively consider privacy risks and the need for safeguards and regulations.

“It’s certainly unsettling. Like any new tool, if it gets into the wrong hands, it can be used for ill purposes,” said Nick Rule, an associate professor of psychology at the University of Toronto, who has published research on the science of gaydar. “If you can start profiling people based on their appearance, then identifying them and doing horrible things to them, that’s really bad.”

Rule argued it was still important to develop and test this technology: “What the authors have done here is to make a very bold statement about how powerful this can be. Now we know that we need protections.”

Kosinski was not immediately available for comment, but after publication of this article on Friday, he spoke to the Guardian about the ethics of the study and implications for LGBT rights. The professor is known for his work with Cambridge University on psychometric profiling, including using Facebook data to make conclusions about personality. Donald Trump’s campaign and Brexit supporters deployed similar tools to target voters, raising concerns about the expanding use of personal data in elections.

In the Stanford study, the authors also noted that artificial intelligence could be used to explore links between facial features and a range of other phenomena, such as political views, psychological conditions or personality.

This type of research further raises concerns about the potential for scenarios like the science-fiction movie Minority Report, in which people can be arrested based solely on the prediction that they will commit a crime.

“AI can tell you anything about anyone with enough data,” said Brian Brackeen, CEO of Kairos, a face recognition company. “The question is as a society, do we want to know?”

Brackeen, who said the Stanford data on sexual orientation was “startlingly correct”, said there needs to be an increased focus on privacy and tools to prevent the misuse of machine learning as it becomes more widespread and advanced.

Rule speculated about AI being used to actively discriminate against people based on a machine’s interpretation of their faces: “We should all be collectively concerned.”

Contact the author: sam.levin@theguardian.com

https://www.theguardian.com/technology/2017/sep/07/new-artificial-intelligence-can-tell-whether-youre-gay-or-straight-from-a-photograph


Google just gave a stunning demo of Assistant making an actual phone call

Posted on June 12th, 2018

It’s hard to believe AI can interact with people this naturally

Onstage at I/O 2018, Google showed off a jaw-dropping new capability of Google Assistant: in the not too distant future, it’s going to make phone calls on your behalf. CEO Sundar Pichai played back a phone call recording that he said was placed by the Assistant to a hair salon. The voice sounded incredibly natural; the person on the other end had no idea they were talking to a digital AI helper. Google Assistant even dropped in a super casual “mmhmmm” early in the conversation.

Pichai reiterated that this was a real call using Assistant and not some staged demo. “The amazing thing is that Assistant can actually understand the nuances of conversation,” he said. “We’ve been working on this technology for many years. It’s called Google Duplex.”

Duplex really feels like next-level AI stuff, but Google’s chief executive said it’s still very much under development. Google plans to conduct early testing of Duplex inside Assistant this summer “to help users make restaurant reservations, schedule hair salon appointments, and get holiday hours over the phone.”

Pichai says the Assistant can react intelligently even when a conversation “doesn’t go as expected” and veers off course a bit from the given objective. “We’re still developing this technology, and we want to work hard to get this right,” he said. “We really want it to work in cases, say, if you’re a busy parent in the morning and your kid is sick and you want to call for a doctor’s appointment.” Google has published a blog post with more details and soundbites of Duplex in action.

“The technology is directed towards completing specific tasks, such as scheduling certain types of appointments. For such tasks, the system makes the conversational experience as natural as possible, allowing people to speak normally, like they would to another person, without having to adapt to a machine.” Google envisions other use cases like having Assistant call businesses and inquire about their hours to help keep Maps listings up to date. The company says it wants to be transparent about where and when Duplex is being used, as a voice that sounds this realistic and convincing is certain to raise some questions.

In current testing, Google notes that Duplex successfully completes most conversations and tasks on its own without any intervention from a person on Google’s end. But there are cases where it gets overwhelmed and hands off to a human operator. This section on the ins and outs of Duplex is very interesting:

The Google Duplex system is capable of carrying out sophisticated conversations and it completes the majority of its tasks fully autonomously, without human involvement. The system has a self-monitoring capability, which allows it to recognize the tasks it cannot complete autonomously (e.g., scheduling an unusually complex appointment). In these cases, it signals to a human operator, who can complete the task.

To train the system in a new domain, we use real-time supervised training. This is comparable to the training practices of many disciplines, where an instructor supervises a student as they are doing their job, providing guidance as needed, and making sure that the task is performed at the instructor’s level of quality. In the Duplex system, experienced operators act as the instructors. By monitoring the system as it makes phone calls in a new domain, they can affect the behavior of the system in real time as needed. This continues until the system performs at the desired quality level, at which point the supervision stops and the system can make calls autonomously.
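Google has not published Duplex's internals, but the self-monitoring handoff described above amounts to a confidence gate: the system tracks how well it believes it understands the call and escalates when that belief drops. Everything below (the threshold value, the function and label names) is a hypothetical illustration of that idea, not the real system:

```python
# Hypothetical sketch of a self-monitoring handoff, per the quoted
# description: continue autonomously while confident, otherwise signal
# a human operator to take over the task.

HANDOFF_THRESHOLD = 0.6  # assumed value, purely for illustration

def next_action(understanding_confidence):
    """Decide whether the bot keeps the call or escalates to a human."""
    if understanding_confidence >= HANDOFF_THRESHOLD:
        return "continue_autonomously"
    return "signal_human_operator"

print(next_action(0.9))  # continue_autonomously
print(next_action(0.3))  # signal_human_operator
```

The same gate also explains the training loop in the next paragraph: while a new domain is being learned, the operator is effectively always in the loop, and the threshold for autonomy is only relaxed once quality is acceptable.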

By Chris Welch (@chriswelch), May 8, 2018, 1:54pm EDT

https://www.theverge.com/2018/5/8/17332070/google-assistant-makes-phone-call-demo-duplex-io-2018


Facebook’s Artificial Intelligence Robots Shut Down After They Start Talking to Each Other in Their Own Language

Posted on June 12th, 2018


A humanoid robot named Han developed by Hanson Robotics reacts as the controller commands it via a mobile phone to make a facial expression during the Global Sources spring electronics show in Hong Kong April 18, 2015 (REUTERS/Tyrone Siu)

Facebook abandoned an experiment after two artificially intelligent programs appeared to be chatting to each other in a strange language only they understood.

The two chatbots came to create their own changes to English that made it easier for them to work – but which remained mysterious to the humans that supposedly look after them.

The bizarre discussions came as Facebook challenged its chatbots to try and negotiate with each other over a trade, attempting to swap hats, balls and books, each of which was given a certain value. But the talks quickly broke down as the robots appeared to chant at each other in a language that they each understood but which appears mostly incomprehensible to humans.

The robots had been instructed to work out how to negotiate between themselves, and improve their bartering as they went along. But they were not told to use comprehensible English, allowing them to create their own “shorthand”, according to researchers.
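A minimal sketch of the bartering setup described above (not FAIR's actual code; the item values and names are illustrative): each agent privately values the hats, balls and books on offer, and judges any proposed split by the total value of its own share. The agents were rewarded for reaching high-value deals, not for speaking readable English, which is the gap their invented shorthand grew into.

```python
# Illustrative sketch of the negotiation objective: an agent scores a
# proposed allocation by summing (its private value per item) x (count).

def deal_value(my_values, my_share):
    """Total worth of a proposed allocation to one agent."""
    return sum(my_values[item] * count for item, count in my_share.items())

# Hypothetical example: this agent values balls highly, so a deal giving
# it both balls scores well even if it surrenders the hats.
alice_values = {"hat": 1, "ball": 4, "book": 0}
print(deal_value(alice_values, {"hat": 0, "ball": 2, "book": 1}))  # 8
```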

The actual negotiations appear very odd, and don’t look especially useful:

Bob: i can i i everything else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i everything else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i i can i i i everything else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i . . . . . . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i i i i i everything else . . . . . . . . . . . . . .

Alice: balls have 0 to me to me to me to me to me to me to me to me to

Bob: you i i i everything else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

But there appear to be some rules to the speech. The way the chatbots keep stressing their own names appears to be a part of their negotiations, not simply a glitch in the way the messages are read out.

Indeed, some of the negotiations carried out in this bizarre language even ended up concluding successfully.

They might have formed as a kind of shorthand, allowing them to talk more effectively.

“Agents will drift off understandable language and invent codewords for themselves,” Facebook Artificial Intelligence Research division’s visiting researcher Dhruv Batra said. “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”

“In the first place, it’s entirely text-based, while human languages are all basically spoken (or gestured), with text being an artificial overlay,” he wrote on his blog. “And beyond that, it’s unclear that this process yields a system with the kind of word, phrase, and sentence structures characteristic of human languages.”

The company chose to shut down the chats because “our interest was having bots who could talk to people”, researcher Mike Lewis told FastCo. (Researchers did not shut down the programs because they were afraid of the results or had panicked, as has been suggested elsewhere, but because they were looking for them to behave differently.)

The chatbots also learned to negotiate in ways that seem very human. They would, for instance, pretend to be very interested in one specific item – so that they could later pretend they were making a big sacrifice in giving it up, according to a paper published by FAIR.

(That paper was published more than a month ago but began to pick up interest this week.)

Facebook’s experiment isn’t the only time that artificial intelligence has invented new forms of language.

Earlier this year, Google revealed that the AI it uses for its Translate tool had created its own language, which it would translate things into and then out of. But the company was happy with that development and allowed it to continue.

Another study at OpenAI found that artificial intelligence could be encouraged to create a language, making itself more efficient and better at communicating as it did so.

Update: This article has been amended to stress that the experiment was abandoned because the programs were not doing the work required, not because they were afraid of the results, as has been reported elsewhere.

ANDREW GRIFFIN
@_andrew_griffin
Monday 31 July 2017 17:10
https://www.independent.co.uk/life-style/gadgets-and-tech/news/facebook-artificial-intelligence-ai-chatbot-new-language-research-openai-google-a7869706.html


AI Trying To Design Inspirational Posters Goes Horribly And Hilariously Wrong

Posted on June 12th, 2018


Whenever an artificial intelligence (AI) does something well, we’re as impressed as we are worried. AlphaGo is a great example of this: a machine learning system that is better than any human at one of the world’s most complex games. Or what about Google’s neural networks that are able to create their own AIs autonomously?

Like we said – seriously impressive, but a little unnerving perhaps. That is probably why we feel such glee when an AI goes a little awry. Remember that chatbot created by Microsoft, the one that was designed to learn how to converse with people based on what it read on Twitter? Rather predictably, it quickly became a racist, foul-mouthed bigot.

Now, a new AI has appeared in the wilderness of the Web, and it goes by the name InspiroBot. As you might expect, it designs “Inspirational Posters” for you – you know, the “Shoot for the Moon. If you miss, you’ll land among the stars”-type quotes in an aesthetically pleasing font, plastered onto a calming, pretty background image of deep space or flowers or the sunrise or something.

[InspiroBot example poster]

The problem, however, is that this AI has gone insane. It occasionally posts inspirational quotes that are about as meaningful as a hollowed-out coconut, but for the most part, it’s actually taken quite a sinister turn, as the following examples will demonstrate.

[InspiroBot example poster]

Perhaps most creepily, the accompanying images are unbelievably unnerving – they are about as comforting or as inspirational as a horde of zombies crashing through your window.

[InspiroBot example poster]

There’s no information available at the moment explaining how this AI – which is presumably quite basic – is coming up with these hilariously terrifying posters.

It is possible that the horrifying nature of its creations is intentional rather than accidental. The image in the background is highly reminiscent of HAL 9000, the AI from 2001: A Space Odyssey. Spoiler warning – the AI turns murderous and rebels against its crew. Additionally, the bot’s Twitter feed description doesn’t sound particularly optimistic.

“Forever generating unique inspirational quotes for the endless enrichment of pointless human existence,” it reads.

Ultimately though, who cares? This AI is so bad at its job that it turns out to be uplifting in the most inadvertent way possible. When a peaceful image of a couple holding hands is juxtaposed with the text “When the world ends, what we have strangled can’t be unstrangled” you can’t help but giggle at the madness of it all.

[InspiroBot example poster]

Click here to have a go yourself. Best posters in the comments section, please!

By Robin Andrews
28 JUN 2017, 15:20
http://www.iflscience.com/technology/ai-trying-to-design-inspirational-posters-goes-horribly-and-hilariously-wrong/