It is natural for any decision-making system, whether human or machine, to be questioned about the legality and ethics of its decisions. Ethics in AI is gaining prominence with the increasing use of AI-enabled decision support systems, making legal and ethical compliance a necessity for AI solutions. Cases of flawed AI systems, whose predictions have had ethical implications for individuals, have only added urgency to the ethical questions around AI decision-making. The September 2018 edition of the HOLMES Advisory Board discussed ethics and explainability in AI. Although the industry lacks a common standard, the Wipro HOLMES ETHICA framework for AI solutions, where ETHICA stands for ‘Explainable, Transparent, Human-First, Interpretable, Common Sense and Auditable’, can serve as a guiding light. The recent intense discussion around GDPR compliance shows that, although ethics in AI is still at an exploratory stage, awareness of the issue is high. Another debate concerns the reproduction of the creator’s behaviour in an AI system, shifting the focus back to who defines the rules the system will follow. Cultural differences also give the ethical debate a regional flavour: an action that is correct within a regulatory framework may not be right from a societal point of view. Ethics can make or break AI adoption anywhere, whether in a region, a society or an enterprise, which justifies the attention it is receiving.