AI Safety and Trustworthiness: A Survey
Keywords:
AI, Safety, Trustworthiness, Academics, Frameworks

Abstract
Artificial intelligence (AI) is now embedded in many areas of critical concern, and its decisions and actions can directly affect people's lives in fields including healthcare, economics, education, and even government. Despite AI's remarkable capabilities in automation, pattern recognition, and problem-solving, its rapid adoption raises questions about safety, dependability, and credibility. AI safety has become a global concern because of unintended outcomes, including bias, vulnerability to adversarial attacks, a lack of transparency in decision-making, and misalignment with human values. This article discusses the safety and reliability of AI from technical, ethical, and governance standpoints. It investigates how to build AI systems that consumers and other stakeholders can trust through robustness, security, transparency, fairness, and accountability. By examining present frameworks, research, and legislation, the study identifies the challenge of striking a balance between innovation and safety. It also suggests directions for future study, such as accountable AI deployment, human-centered development, and interdisciplinary collaboration. The paper frames the creation of safe and reliable AI as both a technological and, more importantly, a socio-ethical challenge that calls for cooperation among academia, industry, government, and civil society.
License
This is an open access article published by the Research Center of Computing & Biomedical Informatics (RCBI), Lahore, Pakistan, under the CC BY 4.0 International License.