Can Facebook, mired in data-security scandals, break through with AI?

Facebook has been in trouble recently: a string of scandals over fake news, terrorist content, and leaked user data has arrived at its door, leaving even the social network with the world's largest user base somewhat overwhelmed. Faced with the public's many questions, Facebook's answer seems simple: use artificial intelligence (AI) to solve these troubles.

Anyone who watched last month's two congressional hearings will have noticed CEO Mark Zuckerberg telling reporters and lawmakers how the company intends to police platform content in the future; the word "AI" appeared more than 30 times in his testimony. Facebook CTO Mike Schroepfer, the man responsible for turning Zuckerberg's public promises into reality, picked up the theme again at a press conference, further describing how the company will use AI to escape its current predicament: "AI is the best way to protect the security of the community."

Not everyone is buying it, however. Critics have pointed out that Facebook's framing is misleading, because it makes people think the challenge facing the company is merely technical. Schroepfer said that even if Facebook could hire enough people to check every post, it would not do so: "If I told you that every message you send will be reviewed by someone before it is published, you might reconsider what you originally wrote, and that is something we don't want to see."

Facebook's early AI deployment: PhotoDNA

In fact, Facebook began using AI to manage its platform as early as 2011, when it adopted a technology called PhotoDNA to detect inappropriate content such as child pornography. According to Schroepfer, the algorithm has been steadily improved to flag the content the platform wants to remove. Nude and pornographic images are relatively easy to identify; bloody and violent images, such as pictures of ISIS captives, were hard to catch at first because small pixel-level variations defeat exact matching, but that problem has since been solved.

PhotoDNA was originally a content-screening technology developed by Microsoft; Hany Farid, a professor at Dartmouth College, later refined it, and it was gradually put into use. The technique computes a hash of an image, video, or audio file, producing a digital signature. Like a human fingerprint, each signature is unique. The platform then only needs to compare the hash of a known offending image against the hashes of uploaded images: a match means the upload is a copy of the illegal image, which effectively prevents its further spread. Many technology giants have adopted the technology, including Google, Twitter, and Adobe. Of course, the technology cuts both ways: it has earned wide praise, but it has also drawn criticism.
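The compare-the-signatures workflow described above can be sketched in a few lines. PhotoDNA itself is proprietary, so this is only a minimal illustration using a simple "average hash" on a tiny grayscale image (a 2-D list of pixel values); the images, threshold, and function names are all invented for the example.

```python
# Minimal sketch of hash-based matching in the spirit of PhotoDNA.
# A real perceptual hash is far more robust; this average hash only
# illustrates the idea of comparing compact signatures.

def average_hash(pixels):
    """Flatten the image and set one bit per pixel above the mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Number of differing bits between two hashes."""
    return bin(h1 ^ h2).count("1")

def is_duplicate(h1, h2, threshold=2):
    """Treat near-identical hashes as copies of the same image."""
    return hamming_distance(h1, h2) <= threshold

# A known banned "image", a re-upload with one pixel slightly altered,
# and an unrelated image.
banned    = [[10, 200], [220, 15]]
reupload  = [[10, 200], [221, 15]]
unrelated = [[200, 10], [15, 220]]

h_banned = average_hash(banned)
print(is_duplicate(h_banned, average_hash(reupload)))   # True: near-duplicate
print(is_duplicate(h_banned, average_hash(unrelated)))  # False
```

Because the comparison tolerates a few differing bits, the slightly modified re-upload still matches, which is exactly the property needed to catch copies of a known illegal image.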

In 2014, Google used PhotoDNA to detect child pornography in a user's mailbox, and the user went to jail as a result. While some applauded this as justice served, others worried that Google was using the technology to invade user privacy. Google responded that it would use the technology only to fight child sexual abuse. Whether Google will keep that promise, we cannot say.


Facebook's first dilemma: getting machines to understand language

Using AI to locate pornographic images may be easy for Facebook, but dealing with fake news, online harassment, and assorted fake publicity campaigns is much harder: the former only requires seeing, while the latter requires reading. Whether machines' ability to understand language can meet the demand remains a big unknown. Schroepfer said Facebook has invested heavily in manpower and resources over the past few months to tackle fake advertising and fake news, and Zuckerberg told reporters he plans to spend three years building a better system to weed out content people do not want to see.

Although web search and machine translation have made major breakthroughs, software still falls conspicuously short at recognizing context and subtle differences in language, making it hard to put into practical use. AI is, after all, a technology, and it remains hard to compare with the human brain. In a keynote on Wednesday, Srinivas Narayanan, head of Facebook's applied AI efforts, used the phrase "Look at the pig!" to illustrate the difficulties of AI and machine learning.

Still, Facebook's algorithms have made some progress at reading. A company spokesperson recently revealed that the self-harm-detection software Facebook deployed last year has produced notable results, leading first responders to be contacted more than 1,000 times. In the first quarter of this year alone, language algorithms found and removed two million pieces of terrorism-related content for Facebook.

Schroepfer said Facebook has improved its bullying-detection software and that it will become more powerful in the future. Reportedly, specialized software automatically generates abusive language, and staff use this synthetic data to train the bullying detector. The adversarial training between the two sharpens both sides, ultimately achieving an effect where one plus one is greater than two.
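The generate-then-train loop described above can be sketched very simply. Facebook has not published its detector, so everything here is a toy stand-in: the templates, the insult words, and the deliberately naive keyword "model" are all invented to show the shape of the technique, not its real implementation.

```python
# Toy sketch of the adversarial setup: a generator fabricates synthetic
# abusive phrases, and a detector is trained on them so each round of
# generated data improves coverage.

def generator(templates):
    """Yield synthetic abusive phrases from simple templates."""
    insults = ["loser", "idiot"]  # placeholder vocabulary
    for template in templates:
        for insult in insults:
            yield template.format(insult)

class KeywordDetector:
    """A deliberately naive detector: flags any word it has learned."""
    def __init__(self):
        self.keywords = set()

    def train(self, phrases):
        # Learn every word seen in synthetic abusive phrases.
        for phrase in phrases:
            self.keywords.update(phrase.lower().split())

    def is_abusive(self, text):
        return any(w in self.keywords for w in text.lower().split())

detector = KeywordDetector()
# One round: the generator produces data, the detector learns from it.
detector.train(generator(["such {}", "total {}"]))
print(detector.is_abusive("don't be an idiot"))  # True: learned from synthetic data
print(detector.is_abusive("have a nice day"))    # False
```

A real system would alternate many rounds, with the generator probing for phrasings the detector misses; this single round only shows why synthetic adversarial data widens the detector's coverage.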

Facebook's second dilemma: working across many languages

Facebook's language technology works best in English, not only because the company is headquartered in the United States, but mainly because the text used to train its software is largely scraped directly from the internet, where most participants write in English. Yet statistics show that more than half of Facebook's users come from non-English-speaking countries, so the gap is serious. For countries that rely heavily on Facebook as a social tool, the cost can be fatal.

In 2017, ethnic cleansing of the Rohingya Muslims took place in Myanmar, and after investigating, UN officials concluded that Facebook had played a role in spreading hatred of the Rohingya. Facebook responded by acknowledging that it had too few content reviewers fluent in Burmese and expressed deep apologies. The company is reportedly working on a project codenamed "MUSE" that could eventually let its language technology serve multiple languages without additional training data. Until that project bears practical fruit, Facebook can only keep collecting new data to improve its ability to work in other locales.
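The core idea behind serving many languages from limited data is to map word embeddings from different languages into one shared space; Facebook's published MUSE work does this with a linear map, and its supervised variant solves an orthogonal Procrustes problem. Below is a small sketch of that alignment step on toy 2-D "embeddings" (the vectors and the 90-degree rotation are invented for illustration).

```python
# Sketch of cross-lingual embedding alignment via orthogonal Procrustes:
# given matched source/target word vectors X and Y, find an orthogonal
# map W minimizing ||X @ W.T - Y||_F using the SVD.

import numpy as np

def procrustes_align(X, Y):
    """Solve the orthogonal Procrustes problem with one SVD."""
    U, _, Vt = np.linalg.svd(Y.T @ X)
    return U @ Vt  # orthogonal W such that X @ W.T approximates Y

# Toy setup: the "target language" space is the source space rotated 90 degrees.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # source-language vectors
Y = X @ R.T                                          # matched target vectors

W = procrustes_align(X, Y)
mapped = X @ W.T
print(np.allclose(mapped, Y))  # True: the learned map recovers the rotation
```

Once such a map is learned from a small bilingual dictionary, a classifier trained on English vectors can be applied to mapped vectors from another language, which is the sense in which no extra per-language training data is needed.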

For now, Facebook's progress appears very slow, as can be seen from the fact that the company has yet to deploy its language resources in countries around the world. At a meeting on Tuesday, Facebook product manager Tessa Lyons-Laing said the company's machine-learning software is learning to spot false information from fact-checkers, but this depends on Facebook's partnerships with local fact-checking organizations, which in turn rest on abundant data. Beyond that, Facebook has no way to deploy its language-technology software.

Final thoughts

Schroepfer has admitted that advancing AI without adding headcount has long been Facebook's main strategy. On Wednesday, Facebook researchers showed how billions of Instagram hashtags can serve as a free source of labels, an approach that has set new records in image recognition.

Yet solving Facebook's many problems without human judgment is simply impossible. When people want to decide in advance what is off-limits, AI cannot stand in for humans. It is just a tool, and the decisions still belong to its owner: human beings.

