Google executives are aware that Bard, the company’s artificial intelligence search engine, doesn’t always provide accurate results, and they are asking employees to fix at least some of the incorrect responses.
In a Wednesday email to staff members, Google’s vice president for search, Prabhakar Raghavan, requested assistance in ensuring that the company’s new ChatGPT competitor gives accurate responses. The email, which CNBC viewed, contained a link to a page of dos and don’ts outlining how staff members should correct answers as they test Bard internally.
Staff members are urged to rewrite Bard’s responses on subjects they are knowledgeable about.
“Bard learns best by example, so taking the time to rewrite a response thoughtfully will go a long way in helping us to improve the model,” the document says.
As previously reported by CNBC, CEO Sundar Pichai asked employees to devote two to four hours of their time to Bard on Wednesday, acknowledging that “this will be a long journey for everyone, across the field.”
Raghavan shared the same sentiment.
“This is exciting technology but still in its early days,” Raghavan wrote. “We feel a great responsibility to get it right, and your participation in the dogfood will help accelerate the model’s training and test its load capacity (Not to mention, trying out Bard is actually quite fun!).”
Google announced its conversation technology last week, but a series of blunders in the run-up to the announcement drove the stock price down nearly 9%. Employees chastised Pichai for the blunders, calling the rollout “rushed,” “botched,” and “comically short-sighted.”
Company leaders are relying on human knowledge to clean up the AI’s mistakes. Google provides guidance on what to consider “before teaching Bard” at the top of the dos and don’ts section.
Google tells employees to keep their responses “polite, casual, and approachable.” It also states that they should be written in the “first person” and in an “unbiased, neutral tone.”
The don’ts include stereotyping: employees should “avoid making assumptions based on race, nationality, gender, age, religion, sexual orientation, political ideology, location, or similar categories,” according to company policy.
The document also warns against “describing Bard as a person, implying emotion, or claiming to have human-like experiences.”
Google then instructs staff to “keep it safe” and mark as inappropriate any responses that offer “legal, medical, or financial advice” or that are vile and abusive.
“Don’t try to re-write it; our team will take it from there,” the document says.
Raghavan said contributors will earn a “Moma badge,” which appears on internal employee profiles, to incentivize people in his organization to test Bard and provide feedback. He said Google will invite the top ten rewrite contributors from his Knowledge and Information organization to a listening session, where they can “share their feedback live” with Raghavan and the Bard team.
“A wholehearted thank you to the teams working hard on this behind the scenes,” Raghavan wrote.
Google did not comment on the matter.
(Adapted from CNBC.com)