As a research and technology company, we understand the profound impact our innovations can have on society. We have therefore taken a holistic approach, drawing on a range of sources from around the world.
These sources include government policies, academic reports, international human rights declarations, and technology guidelines.
Dort Studio has identified key affinities and common themes across these sources. These insights have led us to formulate our own set of guiding principles, reflecting our commitment to building AI and automated systems that are safe, socially beneficial, privacy-protecting, environmentally responsible, unbiased, and explainable.
Safe
We prioritise safety and avoid harm caused by AI or automated processes. Our systems should not pose unreasonable safety risks, and we adopt safety measures that proportionately mitigate the potential risks.
Societal
Underscores our commitment to developing AI systems that contribute positively to society. We intend AI to be used for good and in an inclusive manner, including in services such as health, education, housing security and sustainable communities. Underpinned by the United Nations Universal Declaration of Human Rights.
Privacy Protection
Acknowledges the importance of protecting individuals' personal information and aligns with existing data privacy laws in Australia. Dort Studio has taken a proactive approach by attending the first AI and Data Integrity courses at UTS, lectured by influential policymakers and academics who help shape Australian policy and law.
Environmental
Recognises the need for sustainable practices in AI and automation. We systematically evaluate the energy consumption of our technical solutions alongside their performance. Our projects reduce waste, promote circular-economy practices, and use predictive analytics to cut consumption and congestion. Underpinned by the United Nations Sustainable Development Goals.
Unbiased
We avoid bias by promoting diverse perspectives throughout the process: we seek models trained on diverse data to avoid dataset biases that can perpetuate cultural bias, hire staff from diverse backgrounds, disciplines and demographics, and ensure our research participants are recruited with inclusive criteria. Underpinned by the CSIRO’s Responsible AI practices.
Explainable
We promote transparency and explainability when creating new solutions. We proactively guard against harmful content, disinformation, and risky emergent behaviours, and we monitor economic impacts and overreliance. Our models are carefully selected; the models we train are monitored and maintained; and we document decisions about trade-offs.
Beyond these principles, our operations ensure human oversight to align the technology with human values and ethical standards.
- Consulting communities to foster collaboration and ensure that AI benefits from diverse perspectives.
- Incorporating diverse training data to avoid biases.
- Involving diverse perspectives on algorithms to improve system performance.
- Fine-tuning our principles as models evolve.
- Identifying accountable parties, roles and responsibilities within the organisation.
This is the first post in a series about these principles and our operations.

