Artificial intelligence that can teach? It's already happening

Artificial intelligence could be heading to Australian classrooms — and in schools overseas, it's already there.

In Bahia, Brazil, 15-year-old students David and Roama from Colegio Perfil often start their school day at home, or on the bus.

They pick up their phones, log into the education app Geekie Lab, and begin their classes from wherever they are.

"You can access it everywhere, as long as you have your phone with you," David said.

"The worst bit is you can't really run away from homework because it says when you don't do it," Roama said.

Geekie Lab is one of several personalised education apps created by São Paulo start-up Geekie.

It delivers the entire school syllabus to students in digital lessons that combine text, video and images.

"I live far away, and every time I have to carry heavy books," Roama said.

"Geekie is a more practical way, you can carry it everywhere.

"All your books and all your information you get, but in a more simple way."

The app lets each student push ahead with their lessons at their own pace.

But the software doesn't just deliver content — it quizzes students, assesses how they're performing, and passes this data on to their teacher.

Geekie's engineering manager Leonardo Carvalho said an artificial intelligence (AI) engine built into the software was constantly learning about each student's individual progress, based on these tests.

"We try to understand what are the best possible paths of learning for each student," he said.

"Collecting data is really about knowing people better."

So, for David and Roama, there are two "teachers" watching their progress: an AI program, and their English teacher Rafael.

"It's like being a facilitator," Rafael said.

"They are interacting all the time with the device, and they look for you all the time to check if you are uploading content."

'We need to change'
Secretary of the NSW Department of Education, Mark Scott, said this type of AI software could assist with education in his state's classrooms.

"We need to change the way that we teach, change the tools that we use, change the operating environment," Mr Scott said.

"Technology is a vital and exciting tool that great teachers are going to be able to use to revolutionise education in schools in the years ahead."

Many classrooms already use educational software to teach students content, or to quiz them using banks of questions.

But this newer AI-driven software could tailor content to each individual student.

Mr Scott said this type of software would be able to help students learn more effectively.

"One of the great mysteries for a teacher is what a child has learnt, and why a child has learnt it," he said.

"What I think you're likely to see is more low-stress assessments, where kids are regularly almost doing a check-in with the technology.

"That then allows the teacher, as a specialist, to fine-tune how the teaching and learning takes place from there.

"I see a partnership between teachers and technology.

"There are some great opportunities for us, but we're going to have to change some things.

"This year we will be creating a catalyst lab where we will be looking to target small-scale experiments, investing seed money, and actually doing these pilot programs."

A Department of Education spokesman has confirmed a "catalyst lab innovation program" will be launched in July to trial new teaching technology in classrooms.

"Ideas will focus on the theme of applied learning and are likely to range from new assessment tools and teaching resources to programs to connect schools and students," the spokesman said.

Is AI the path to personalised learning?
In recent years, Australian school students have been sliding down the world education rankings, especially in STEM subjects.

Earlier this year, the Public Education Foundation said Australia's declining performance in maths, reading and science would cost the nation $120 billion over the next 45 years.

In March, David Gonski released a new report, arguing the current mass education model was the problem.

He proposed a solution: personalised learning, which would let students work at their own pace.

Prime Minister Malcolm Turnbull has thrown his weight behind the idea.

Some believe AI software will be the key to a new personalised teaching model, while others have even argued AI machines will replace teachers within the next 10 years.

Mr Scott said he wanted teachers and students to be ready for the changes new technology could bring to classrooms.

"We know the world outside is changing dramatically," he said.

"We are in discussions with major technology companies and with business partners who want to help us create an environment where we can experiment and where we can scale."

AI teachers aren't robots standing in front of a class - yet
Back in Brazil, Mr Carvalho from Geekie said the AI software did not replace teachers.

"Teachers have a really important role in what to do with this data," he said.

"It really changes the work."

He said about 5 million students across Brazil were already using the software.

Professor Rose Luckin, who researches artificial intelligence in education at University College London, said there was huge potential to provide teachers for the millions of children in the world who don't have a teacher at all.

"To provide specialist tutoring in areas that a school cannot afford," Professor Luckin said.

"We could be giving every child in the world the best tutor in the world, in a very narrow and particular way."

Professor Luckin also said that if students already had personal devices to run the software, it could be surprisingly cost-effective.

"Technology doesn't have sick days," she said.

"You might have a high outlay to start off with, but actually the maintenance is relatively low."

But she cautioned against simply replacing teachers with AI software.

"If education policy-makers and decision-makers see those systems as an economical solution to a problem of expensive teachers, teacher shortages, that's quite dangerous," she said.

"They can only teach a certain sort of thing."

AI software frees teachers to do the things they are really good at, Professor Luckin said, such as helping students with social skills, creative problem-solving and group work.

For children, she said, educational AI software could be very supportive.

"It is one-to-one tutoring, it is adaptive, in a way that it's hard for teachers who are, throughout the world, increasingly coping with much larger numbers of students," she said.

Privacy, screen time, accessibility
Professor Luckin also raised concerns about where a student's individual data would be stored, and how that data would be used.

"I think there are also some enormous risks around ethics and privacy," she said.

Two common concerns are who gets access to the technology, and how much decision-making we allow it to take on in society.

And many parents are already anxious about screen time for children.

However, Mr Scott said schools were already trying to help students manage their engagement with technology in the classroom and beyond.

"We need to be careful of screen time, and I certainly don't see a future where students are just locked in front of screens," he said.

Mr Scott said NSW could be the ideal place to trial new educational technology.

"2,200 schools, 800,000 students in the government school system — it's one of the largest education systems in the world," he said.

"So if you can have an impact here, in a system this size, then you're going to have an impact on a lot of students."

This could lay the groundwork for other education systems across the country, he said.

But Mr Scott said teacher training was an important part of the solution too.

"We know that finally the answer is about teaching and teaching quality, and what happens in the classroom."


Building “God”

“And the Lord spoke all these words: I am the Lord thy God, who brought thee out of the land of Egypt, out of the house of bondage. Thou shalt not have strange gods before me. Thou shalt not make to thyself a graven thing, nor the likeness of any thing that is in heaven above, or in the earth beneath, nor of those things that are in the waters under the earth. Thou shalt not adore them, nor serve them: I am the Lord thy God, mighty, jealous, visiting the iniquity of the fathers upon the children, unto the third and fourth generation of them that hate me.”

The magisterial words handed down to Moses take on ever greater import now that the age of artificial intelligence has arrived. What more graven thing is there, what more likeness of any thing in heaven can there be, than a superintelligent AI? We have little to no hope of even understanding a portion of such a thing, should it come into being.

Already, we have trouble understanding our narrow-domain AI. In 2017, Digital Journal reported: "An artificial intelligence system being developed at Facebook has created its own language. It developed a system of code words to make communication more efficient. Researchers shut the system down when they realized the AI was no longer using English." Google's translation AI seems to have done something similar, and it wasn't shut down. In 2016, New Scientist reported on the advance: "Google's researchers think their system achieves this breakthrough by finding a common ground whereby sentences with the same meaning are represented in similar ways regardless of language — which they say is an example of an 'interlingua.'" The magazine added: "In a sense, that means it has created a new common language, albeit one that's specific to the task of translation and not readable or usable for humans."

The fact that humans are already being locked out of understanding how AI is working was underscored in 2016 by Guruduth Banavar, IBM chief science officer for cognitive computing. “It’s not clear even from a technical perspective that every aspect of AI algorithms can be understood by humans,” he told Fast Company magazine.

If we are already having difficulty understanding the limited AI of the present, how can we hope to understand, much less control, the increasingly intelligent AI of the near future? And should we create machine intelligence that exceeds our own, as ours exceeds that of the cockroach?

Perhaps most importantly, is it already too late?


AI is coming, whether Australia has the policies to deal with it or not, report warns

PHOTO: Eden, a work by Australian artist Jon McCormack that incorporates artificial intelligence. (Supplied: Jon McCormack)

Australians need to decide how we use artificial intelligence technologies before those decisions are made for us, a major report commissioned by chief scientist Alan Finkel has warned.

Key points:
  • The report warns Australians are using "off-the-shelf" AI, which is not necessarily designed for the Australian environment
  • It calls for a national strategy to guide the regulation and use of artificial intelligence
  • Australia's chief scientist says the Government needs to know how AI software is being used and what "behaviour we are complicit in"

A group of expert scientists working under the Australian Council of Learned Academies has today released a report urging the Government to develop a national strategy to guide the regulation and use of emerging technology, and to establish an independent AI institute.

It also noted that "inevitable" AI technology was poised to disrupt almost every facet of Australian society, and warned that it should be developed in an "effective" and "ethical" way.

The wide-ranging report examined how Australia was placed to respond to emerging technologies, from 'robo-judges' in the United States sentencing low-range offenders, to automated psychologists identifying a client's subtlest expressions, to automated cameras in China publicly shaming jaywalkers.

Reporting group co-chair Professor Neil Levy, the former head of neuroethics at Victoria's Florey Institute, said AI offered great opportunity and great risk.

"We are talking about a huge contributor to the world economy ... and something with very major risks if it's not managed appropriately," Professor Levy said.

National strategy needed
Professor Levy said Australia needed to have control over its own AI systems and the data they used.

He pointed to China's recently introduced 'social credit score', under which citizens can be rewarded or punished by a largely automated behaviour-monitoring system, and warned that technologies like it could enter Australia if the country was not prepared to respond to their emergence.

In the US, the Michigan Unemployment Insurance Agency (UIA) launched an automated fraud-detection system in 2013 that recorded a 93 per cent error rate, falsely accusing some 20,000 claimants of fraud.

"We have ethical standards that we want to abide by, we need to know our software, what use it's being put to, and what kinds of behaviour we are complicit in," Professor Levy said.

Professor Levy said currently Australians were buying AI "off the shelf", which was not necessarily well designed for the Australian environment.

"Overseas, law enforcement using facial recognition has tagged members of ethnic minorities as suspicious, simply because they haven't been included in the training stage of the AI program," he said.

"If we use off-the-shelf AI we risk that situation happening here."

Professor Levy said the report deliberately avoided prescribing a particular solution, but pointed to the United Kingdom as a model.

In April last year, the UK Government released a national AI strategy and established several new bodies to support the development of AI, including an Office for Artificial Intelligence.

Australians need to be confident their information is secure
PHOTO: Researchers suggest artificial intelligence will affect every job in Australia — including office workers, police and construction workers. (ABC News: Ben Spraggon)

A key finding of the report was that Australians would benefit from an 'ethical certificate' on consumer technology, similar to the food standards label.

The chief scientist has previously proposed a "Turing certificate" for smartphones, smart home devices and other technologies that collect people's data: a minimum standard promising that a device will not surreptitiously collect or use a person's data.

Dr Finkel said the Turing certificate was already under "active consideration" by Standards Australia.

Professor Levy said for Australia to become a leader in artificial intelligence technology, the public needed to have confidence that their information was secure.

Opportunity to be leader in AI: report
Dr Finkel said it was up to Government how it chose to respond to the report, but said it needed to begin asking, "What kind of society do we want to be, and how do we ensure that most effectively?"

"I would hope that government departments ... will use this report as if it were a reference manual," he said.

Professor Levy said Australia was well positioned to be a leader in at least some areas of artificial intelligence.

"We do need to be realistic, we can't compete everywhere," Professor Levy said.

"But it's realistic to say we have opportunities to be excellent in some things, we have proven that time and time again."

ABC News