
Cyber Security

  
Research Project Proposal: Initial Template
Name: Saud Buhyaliq Althubyani
Module: MSc Cyber Security

Project Title [up to 150 chars]

Malicious Use of Artificial Intelligence in Cyber Security

Start Date and Duration

a. Proposed start date: 18/05/2020
b. Duration of the grant (months): 4 months

Applicants

Role | Name | Organisation | Division or Department | Hours per week on the project
Principal Investigator | Saud Althubyani | University of Plymouth | School of Computing & Mathematics | 72 hours
Researcher Co-Investigator | Prof. Steve Furnell | University of Plymouth | School of Computing & Mathematics | (not stated)

1. Summary

Describe the proposed research in simple terms in a way that could be publicised to a general audience

The proposed research concerns the malicious use of artificial intelligence (AI) in cyber security. AI is the creation of systems that can perform intelligent tasks using digital technology (Russell and Norvig, 2016). AI can be used to compromise the security of computer systems; the term "malicious" refers to deliberate practices intended to compromise that security, whether the target is a group, an institution, or an individual. Artificial intelligence is learning exhibited by machines that take in their environment and act on it to solve a problem successfully. According to Chen and Leung (2018), AI is varied and includes capabilities such as natural language processing, reasoning technology, human interface technology, and narrative generation.

In recent years, AI and robotics have spread rapidly and transformed people's lives and the way technology is used. For instance, organisations have been using the cognitive abilities of AI to improve their services to clients. This is possible by having a system learn expertise from a professional and then deliver the same service to clients more accurately than human beings can (Gudivada et al., 2016). In driverless cars, for example, the knowledge needed to manoeuvre a car is fed into a cognitive AI system, which is then used to provide driving services. Such services help clients who cannot be licensed, such as the elderly, the disabled, and the underage, as well as those who prefer not to drive. Over time this intelligence is refined and becomes more accurate than human experts in the field. According to Gudivada et al. (2016), artificial intelligence can learn from the collective knowledge of humankind and use digital sensor data in the development of automated teachers and advisors.

Although AI is highly rated for its intelligence and for solving contemporary digital challenges, its use carries risks relating to algorithms, privacy, ethics and liability, and operating systems (Ramos et al., 2008). While AI has mostly been used to benefit digital applications such as search engines and driverless cars, it can also be used maliciously. This study discusses the various ways in which AI can be used maliciously so that solutions can be found to mitigate such risks. The malicious use of AI affects how digital infrastructure is designed. The study will therefore focus on identifying malicious uses of artificial intelligence, how they are harmful, and ways in which the risks can be mitigated. Its scope is AI technologies that have been in use for the last five years. The study will concentrate on the threats and risks inherent in the use of AI and will also investigate whether AI itself can be used to identify and mitigate those risks. It will further identify the areas in which AI is used maliciously through a classification of AI-based cyber-security challenges, focusing on the risks that institutions and individuals face when the systems they use are compromised.

The relevance of the study lies in the implications of AI and in what can be done to mitigate deliberate malicious attacks enabled by AI. Unintentional forms of misuse, such as algorithmic bias and indirect threats, are left out of consideration because they are not deliberate. The literature review will draw on surveys of the topic; as the papers are varied, the focus will be on cautionary papers that argue AI systems can easily be corrupted. The scholarly sources are peer-reviewed survey articles, which in turn lead to further sources.

2. Objectives

List the main objectives of the proposed research in order of priority (maximum ½ side of A4)

This study will seek to achieve the following objectives, in order of priority:

1. To evaluate the various forms of malicious use of artificial intelligence in cyber security.
2. To evaluate the impact of the malicious use of AI on internet users at the University of Plymouth.
3. To assess the technological readiness of University of Plymouth internet users and its effect on malicious uses of AI.
4. To appraise possible ways of preventing malicious AI attacks.
5. To examine techniques for mitigating malicious artificial-intelligence attacks.

4. Proposed Research and its Context

Describe the proposed research in technical terms and include a minimum of 4 references at the end of this section

Background

Artificial intelligence has no ethical bias of its own; it relies on its designers to ensure that good ethics are followed. The technology can therefore be used for malicious designs as well as for good purposes, and the dangers of AI arise when it is used to undermine cyber security. According to Gasser and Huhns (2014), machine learning must be taken into account, since machine-learning algorithms are able to learn and grow as sophisticated as any AI system. Machine learning feeds an algorithm with a large number of examples of solutions to a problem. For instance, for an AI system to identify an item, the learning algorithm would be fed many examples of that item; the algorithm determines what the examples have in common and can then identify the item in question. The defence systems that fight cyber-attacks use machine learning in this way to mitigate attacks, and it is evident that the malicious use of AI likewise takes place through machine learning (Nilsson, 2014).

There is no perfect security solution, but there are ways to minimise the risk of cyber threats. Since cyber-attacks have become far more frequent in recent times, there is a need to prevent attacks before they happen. According to Dilek et al. (2015), a UK report gives recommendations for mitigating threats caused by AI. First, policymakers and researchers in cyber security should work together on the prevention of the malicious use of AI. Second, best practices can be drawn from experience in handling risks. Third, AI is a dual-use technology, so researchers should be aware of both sides and take advantage of that fact. Lastly, the range of stakeholders engaged in mitigating malicious risks should be expanded so that mitigation is equally expansive. Cyber security has been compromised in recent times owing to the dynamic development of malware.
Malicious users of malware are continuously finding new methods of breaching security systems (Brundage et al., 2018), and one of the newest is the use of artificial intelligence. Companies such as Darktrace were among the first to discover that AI was being used maliciously by means of machine learning (Veiga, 2018): the malicious system learned the behaviour of the network that hosted it and, after collecting information, began copying that behaviour, which made it difficult to identify and mitigate. It is therefore prudent to understand the threat that AI poses to a system. The ability of AI to learn is what makes it a formidable threat to cyber security, as it can easily fool typical defence mechanisms.

CHAPTER 2: LITERATURE REVIEW

2.0 Threats

User error is a threat created by internet users and exploited by malicious actors. For instance, through phishing, an email can be used to collect information from a user who has no idea that clicking a malicious link may be harmful. With AI, there is evidence of a higher rate of phishing, as a system can learn and copy the patterns of the emails it encounters. The usual cyber-security measures identify an irregularity in code and then neutralise it; with AI, however, the threat is greater because malicious code can hide in a system undetected for a long time, and such an attack may be detrimental. Since AI's efficiency lies in learning patterns, it can find errors in code and detect malware faster, and better, than humans can. However, AI can also be turned against itself: a hacker may change the data it is learning from or alter its algorithms, so that the AI learns to ignore the very problems that exist. In cases where AI is made to signal many attacks at once, the system can shut down to protect its data.
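The idea of turning AI against itself by changing the data it is learning from can be sketched with a deliberately simplified example. The feature values and the tiny nearest-centroid "detector" below are illustrative assumptions, not a real phishing classifier:

```python
# Toy illustration (not a real detector): a label-flipping "data poisoning"
# attack against a minimal nearest-centroid classifier. Each feature vector
# is hypothetical, e.g. [number of links, count of urgency words] per email.

def centroid(points):
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def train(samples):
    """samples: list of (features, label) with label 'ham' or 'phish'."""
    return {lbl: centroid([f for f, l in samples if l == lbl])
            for lbl in {l for _, l in samples}}

def classify(model, features):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda lbl: dist(model[lbl], features))

clean = [([0, 1], "ham"), ([1, 0], "ham"),
         ([8, 9], "phish"), ([9, 8], "phish")]

# An attacker with write access to the training set flips the phishing labels.
poisoned = [(f, "ham" if l == "phish" else "phish") for f, l in clean]

suspicious_email = [9, 9]  # many links, many urgency words
print(classify(train(clean), suspicious_email))     # flagged as phishing
print(classify(train(poisoned), suspicious_email))  # now slips through as ham
```

The point of the sketch is that the attacker never touches the classifier's code: corrupting the training data alone is enough to make the defence ignore exactly the traffic it was built to catch.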
However, although machine-learning development is widely used for malware detection, it has rarely been able to protect itself from hackers who use AI maliciously.

2.1 Chatbots

Chatbots have in recent times been used for malicious AI attacks. A chatbot is an application that conducts an online conversation through speech or text; it is used in place of human conversation and even simulates the way humans speak (Nilsson, 2014). Chatbots require tuning and frequent testing so that they converse with human beings effectively. AI enables such conversations by learning how human beings converse and following their speech patterns. Chatbots make it possible to deliver services such as customer support and questionnaires and to route requests from clients, and AI has allowed such systems to process natural language and use various word classifications (Khanna et al., 2015). Khanna et al. (2015) point out that chatbots have recently been used maliciously: in 2016, for instance, a chatbot was used to trick Facebook users into installing malware that gave remote access to users' information. Since chatbots appear in website pop-ups, in messaging apps such as Facebook Messenger, and in voice assistants, they offer an accuracy that is not possible with humans. This is important for customer satisfaction but can also be exploited: a conversation with a chatbot on Facebook or with a banking chatbot may reveal personal information, such as an address, financial details, or a phone number, through which malicious users can steal data. According to Makar and Tindall (2014), Amazon's Alexa and Google's Assistant are the most advanced chatbots and are widely used and accepted as reliable in the home, where they handle commands and inquiries.
For this reason, they pose a much higher risk of malicious use: they are not fully secure, which may cause them to reveal important data about their users.

2.2 De-Anonymization

De-anonymization is a data-mining process in which anonymized data is cross-referenced with other data sources to re-identify the individuals behind it (Gambs et al., 2014). For instance, multiple data points can be combined with other information to correctly determine a person's identity. According to Gambs et al. (2014), de-anonymization can be used maliciously to identify users and abuse their information. In 2006, for example, researchers were able to identify Netflix users from records that were supposed to be anonymous (Narayanan and Shmatikov, 2006). The malicious use of the process is therefore possible and can have debilitating effects, especially in research. AI lends itself to various forms of de-anonymization because of its ability to learn a data set and its relation to other data.

3.0 Programme and Methodology

Introduction

According to Floridi et al. (2018), a well-designed research framework in cyber security is important for meeting the objectives of a study. This chapter therefore sets out the research methods, design, data collection, and data analysis used to find the best methods of mitigating malicious cyber-attacks. Brundage et al. (2018) suggest that a research framework in this field should be comprehensive in order to find possible solutions to the problem of the malicious use of AI; this is challenging because the field of cyber security is highly dynamic. The methodology will also discuss the evaluation, validity, and reliability of the research.

3.1 Research Approach

This research will adopt a deductive approach, in which a given theory is tested for validity.
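Returning briefly to the de-anonymization threat in section 2.2, the underlying linkage attack can be sketched in a few lines. All names, postcodes, and viewing records below are invented for illustration; real attacks such as the Netflix case used far richer data:

```python
# Hypothetical data: a toy linkage (de-anonymization) attack. An "anonymized"
# release keeps quasi-identifiers (postcode prefix, birth year) that also
# appear in a public directory, so the two sources can be joined.

anonymized_release = [
    {"postcode": "PL4", "birth_year": 1995, "watched": "Documentary A"},
    {"postcode": "PL1", "birth_year": 1978, "watched": "Thriller B"},
]

public_directory = [
    {"name": "Alice", "postcode": "PL4", "birth_year": 1995},
    {"name": "Bob",   "postcode": "PL1", "birth_year": 1978},
    {"name": "Carol", "postcode": "PL4", "birth_year": 1988},
]

def deanonymize(release, directory):
    """Join the two sources on the shared quasi-identifiers."""
    reidentified = []
    for record in release:
        matches = [p["name"] for p in directory
                   if (p["postcode"], p["birth_year"])
                   == (record["postcode"], record["birth_year"])]
        if len(matches) == 1:  # a unique match re-identifies the person
            reidentified.append((matches[0], record["watched"]))
    return reidentified

print(deanonymize(anonymized_release, public_directory))
# [('Alice', 'Documentary A'), ('Bob', 'Thriller B')]
```

Even this crude join recovers identities whenever the quasi-identifier combination is unique, which is why removing names alone does not anonymize a data set.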
The time available and the research questions make survey questionnaires appropriate for this study, because they allow the researcher to investigate the problem in depth by asking people whether they are aware of the malicious use of artificial intelligence. All of the respondents will be users of AI-enabled systems and may therefore have encountered the malicious use of AI. Survey questionnaires are easy to use and are also less costly with a large sample. According to Tyagi (2016), questionnaires are efficient when the researcher intends to collect data from a large number of people. The sample will comprise 50 respondents from the University of Plymouth who have used computers and have experience with AI: students who use the internet, members of the university's IT department, and university staff. The questionnaires can be distributed by email or through a website so that computer users have easy access to them. The quantitative data collected will focus on the number of respondents who are aware of the malicious use of AI, while also recording the individuals who have actually encountered a malicious attack. The data will likewise be used to test the connection between the responses and the existing theory that artificial intelligence can be used maliciously (Tyagi, 2016). The survey is intended to investigate the threat posed by the malicious use of AI so that such attacks can be mitigated in the future. A quantitative survey is easy to interpret, so informed conclusions and deductions can readily be drawn from it, and its ease of administration means that a great deal of information can be collected in a short time (Chen et al., 2004).
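As a sketch of the kind of quantitative summary this approach yields, the snippet below tallies awareness and encounter rates from hypothetical questionnaire responses. The field names and answers are invented, not drawn from any actual survey:

```python
# Hypothetical questionnaire responses: each dict holds one respondent's
# answers to two closed (yes/no) questions.
responses = [
    {"role": "student", "aware_of_malicious_ai": True,  "encountered_attack": False},
    {"role": "student", "aware_of_malicious_ai": False, "encountered_attack": False},
    {"role": "staff",   "aware_of_malicious_ai": True,  "encountered_attack": True},
    {"role": "it_dept", "aware_of_malicious_ai": True,  "encountered_attack": True},
]

def rate(rows, field):
    """Proportion of respondents answering True to a closed question."""
    return sum(r[field] for r in rows) / len(rows)

print(f"aware:       {rate(responses, 'aware_of_malicious_ai'):.0%}")
print(f"encountered: {rate(responses, 'encountered_attack'):.0%}")
# aware:       75%
# encountered: 50%
```

With the real sample of 50 respondents, the same tallies, broken down by role and experience level, would form the basis of the comparisons described in the research design.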
3.2 Research Design

According to Bezobrazov et al. (2016), the research design is the strategy taken by the researcher, involving the use of different elements to solve a research problem. Ideally, the design shows how the data will be collected and how it will ultimately be analysed. The objectives of this study are reachable through quantitative analysis: identifying the number of participants who are aware of the risks of malicious use of AI indicates both the extent of the problem and the level of awareness among internet users of the risk they are exposed to. This is relevant because it reveals what users are aware of and whether they have experienced the malicious use of AI. Sengupta (2017) adds that explaining the purpose of the study is imperative once the sample, in this case participants who are aware of AI and have used it, has been identified.

3.3 Research Methods

It is often important to conduct a pilot study before the main data collection. One or two respondents are given the survey questionnaire to find out how long it takes to complete and to expose other weaknesses in the data-collection method, for instance whether the questions are effective and whether the language is appropriate for the audience. A questionnaire that is too long may bore participants so that they do not finish it, and one that uses overly technical language may be too difficult to answer. The questionnaires are then used in the main study to collect the data, and they will contain both open and closed questions for respondents to complete. The sample of internet users will include both male and female participants, ranging from beginners to experts.
The reason for including both beginners in internet use and experts from the IT department is to give the collected data variety: a novice reacts to a malicious attack differently from an expert in the field, and by comparing the two, the researcher can see whether training is effective in identifying the malicious use of AI. Through a visiting expert on the subject, the research can also benefit from specialist insight into a common problem. In view of the problem, the solutions found can be used to raise public awareness and to help people mitigate the risk.

3.4 Data Analysis

According to Sanz et al. (2013), data analysis is the backbone of any research in risk mitigation: the collected data is evaluated systematically to establish the relationships between its components. Because this study uses quantitative methods, the analysis will also be quantitative, since it can handle more than one kind of entry. The information used in this study comes not only from the survey questionnaires but also from databases, books, and academic literature in the field of AI; such secondary data is easily available and affordable to use. To measure the progress of the project, a leader and persons accountable for each stage, up to and including data analysis, need to be identified. If working as a group, for instance, some members can be made responsible for creating and administering the questionnaire while another group analyses the collected data; a third group can then draw conclusions and write a report used to inform the public of the purpose of the study and the conclusions reached. The idea is to inform the public of the mitigation recommendations.
When working as a lone researcher, the work is divided into measurable tasks to be accomplished within a specific time frame.

3.5 Data Evaluation

Data evaluation is the final step, as it provides solutions to the problem: the research data is scrutinised against the objectives of the study. In this case, the evaluation will use the collected data to establish how artificial intelligence is used maliciously and in which forms. The analysis will also be used to evaluate the effects of the malicious use of AI on internet users at the University of Plymouth, and to assess how technologically ready the university is to deal with malicious usage of AI and its impact. The results of the evaluation will enable me to identify specific solutions to the specific problems revealed by the analysis, and thus to examine techniques for mitigating malicious artificial-intelligence attacks.

3.6 Dissemination and Exploitation

The route of transfer will be chosen before the collected information is disseminated. Since the public involved are the students and staff of the University of Plymouth, knowledge can best be transferred through an academic course, through pamphlets distributed around campus, or through a digital platform, given how heavily the audience uses the internet. According to Olhede and Wolfe (2018), mechanisms are needed for the identification, protection, and subsequent exploitation of any exploitable results arising from the research. The research therefore has to be conducted to certain ethical standards. For instance, the consent of each respondent is required before the study begins, and the confidentiality and privacy of respondents must be ensured.
Respondents should therefore be made aware that no information they give will be leaked to outside sources. For instance, if a respondent has deliberately used an AI system maliciously, they should not be exposed or castigated: the research is not the place for that, as it is strictly for data-collection purposes. To protect respondents from this kind of risk, the questionnaire must ensure anonymity, so that the identity of the respondent is never revealed. Respondents may also choose to withdraw from the study, which must be allowed if they so wish, and deception, in which the researcher misrepresents facts about the consequences of the study, is not permitted.

4.0 Conclusions and Recommendations

Solutions

The research will identify possible recommendations, as the threat of AI-based malicious attacks is a real problem. In such a threatening environment, one solution is to stage intentional attacks so that companies become aware of the risks and gain insight into risk management for AI; the use of algorithms and data sets may show how a system would act in the event of an attack, so mitigating such attacks may require prior preparation. A second method is to disclose the potential vulnerabilities of systems to their users so that they do not reveal vital information, and so that appropriate action can be taken to protect clients. Third, a good defence always lies in knowledge, as AI can be trained to identify malicious attacks: by detecting intrusions, it becomes possible to identify and counter a problematic AI and achieve cyber security. Informing individuals, such as Netflix users, of the risks of an application can go a long way towards mitigating attacks, since users become aware of the practices that make them vulnerable. Users can also protect their data with Virtual Private Networks (VPNs), especially where public networks such as Wi-Fi are used.
The use of strong passwords and wariness of links is also prudent in protecting information from hackers and phishing emails. In conclusion, digital technological developments can be used for both positive and negative ends. AI is not immune to malicious use; capabilities should therefore be developed within AI to protect systems from malicious users. This section will provide a summary of the findings of the study and a projection of the future use of artificial intelligence.

5.0 References

Bezobrazov, S., Sachenko, A., Komar, M. and Rubanau, V., 2016. The methods of artificial intelligence for malicious applications detection in Android OS. International Journal of Computing, 15(3), pp.184-190.

Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., Dafoe, A., Scharre, P., Zeitzoff, T., Filar, B. and Anderson, H., 2018. The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. arXiv preprint arXiv:1802.07228.

Chen, H., Chung, W., Xu, J.J., Wang, G., Qin, Y. and Chau, M., 2004. Crime data mining: a general framework and some examples. Computer, 37(4), pp.50-56.

Chen, M. and Leung, V.C., 2018. From cloud-based communications to cognition-based communications: A computing perspective. Computer Communications, 128, pp.74-79.

Dilek, S., Çakır, H. and Aydın, M., 2015. Applications of artificial intelligence techniques to combating cyber crimes: A review. arXiv preprint arXiv:1502.03552.

Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F. and Schafer, B., 2018. AI4People: An ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), pp.689-707.

Gambs, S., Killijian, M.O. and del Prado Cortez, M.N., 2014. De-anonymization attack on geolocated data. Journal of Computer and System Sciences, 80(8), pp.1597-1614.

Gasser, R. and Huhns, M.N., 2014. Distributed artificial intelligence (Vol. 2). Morgan Kaufmann.

Gudivada, V.N., Irfan, M.T., Fathi, E. and Rao, D.L., 2016. Cognitive analytics: Going beyond big data analytics and machine learning. In Handbook of Statistics (Vol. 35, pp. 169-205). Elsevier.

Hill, J., Ford, W.R. and Farreras, I.G., 2015. Real conversations with artificial intelligence: A comparison between human-human online conversations and human-chatbot conversations. Computers in Human Behavior, 49, pp.245-250.

Khanna, A., Pandey, B., Vashishta, K., Kalia, K., Pradeepkumar, B. and Das, T., 2015. A study of today's AI through chatbots and rediscovery of machine intelligence. International Journal of u- and e-Service, Science and Technology, 8(7), pp.277-284.

Makar, M.G. and Tindall, T.A., DELFIN PROJECT Inc, 2014. Dynamic chatbot. U.S. Patent Application 14/287,815.

Nilsson, N.J., 2014. Principles of artificial intelligence. Morgan Kaufmann.

Olhede, S. and Wolfe, P., 2018. The AI spring of 2018. Significance, 15(3), pp.6-7.

Ramos, C., Augusto, J.C. and Shapiro, D., 2008. Ambient intelligence: the next step for artificial intelligence. IEEE Intelligent Systems, 23(2), pp.15-18.

Russell, S.J. and Norvig, P., 2016. Artificial intelligence: a modern approach. Pearson Education Limited.

Sanz, B., Santos, I., Nieves, J., Laorden, C., Alonso-Gonzalez, I. and Bringas, P.G., 2013. MADS: malicious Android applications detection through string analysis. In International Conference on Network and System Security (pp. 178-191). Springer, Berlin, Heidelberg.

Sengupta, S., 2017. Moving Target Defense: A Symbiotic Framework for AI & Security. In Proceedings of the 16th Conference on Autonomous Agents and Multiagent Systems (pp. 1861-1862). International Foundation for Autonomous Agents and Multiagent Systems.

Tyagi, A., 2016. Artificial Intelligence: Boon or Bane?. Available at SSRN 2836438.

Veiga, A.P., 2018. Applications of artificial intelligence to network security. arXiv preprint arXiv:1803.09992.

9. Resources required and their Justification

In this section (maximum 1/4 side of A4) you should describe the need for the resources you are requesting, including technician support etc.

7. Work Plan

Maximum of 1 A4 page

Work Package 1: Organization and preparation of data collection
- Prepare the survey questionnaire
- Prepare the consent form for respondents
- Administer the pilot study
- Write the literature review section after collection of sources

Work Package 2: Data collection
- Organize the whole process of data collection and draft the Methodology chapter
- Draft the survey design and prepare the survey
- Carry out field work and collect primary data by administering the questionnaire online
- Monitor the questionnaire responses for the specified number of respondents
- Identify hitches in the data-collection process
- Organize the collected data

Work Package 3: Data Analysis
- Evaluate the collected data through data analysis
- Draft the results and discussion chapter
- Draft the conclusion and discussion chapter
- Make final revisions, chart graphs and appendices, and complete the bibliography
- Prepare the pamphlets for public information


Gantt chart:

Project: Malicious use of artificial intelligence in cyber security
Schedule periods (Gantt columns): May 18-25, May 25-31, Jun 1-9, Jun 10-11, Jun 12-15, Jun 16-23, Jun 24-30, Jul 1-5, Jul 6-19, Jul 20-24, Jul 25-31, Aug 1-15, Aug 16-22, Aug 23-31

Tasks (Gantt rows):
- Gathering the relevant literature, conducting library research and examining the selected sources
- Drafting the literature review
- Preparing the survey questionnaire
- Preparing the consent form / requesting permission from the school
- Administering the pilot study to a few respondents
- Writing the literature review section after collection of sources
- Drafting the Methodology chapter
- Drafting the survey design and preparing the survey
- Field work and collection of primary data
- Administering the online questionnaire
- Monitoring the questionnaire for the specified respondents
- Organizing the collected data
- Data analysis and drafting the results and discussion chapter
- Drafting the conclusion and discussion chapter
- Final revisions; graphs and appendices charted; bibliography completed
