Development of software with AI capabilities means building new software, or evolving existing software, so that it outputs AI analytics results to users (e.g., demand forecasts) and/or triggers specific actions based on them (e.g., blocking fraudulent transactions).
Supported by AI, an application can automate business processes, personalize service delivery, and generate business-specific insights. According to Deloitte, 90% of seasoned AI adopters say that “AI is very or critically important to their business success today”.
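To make the “trigger specific actions” pattern above concrete, here is a minimal sketch of gating a transaction on a fraud score. The model object, feature layout, and the 0.9 threshold are illustrative assumptions, not a recommendation:

```python
# Minimal sketch of acting on an AI output: a fraud score gates a transaction.
# The model interface follows the common scikit-learn convention; the threshold
# is hypothetical and would come from business requirements in practice.

def handle_transaction(transaction: dict, model) -> str:
    """Score a transaction and block it if the fraud probability is too high."""
    fraud_probability = model.predict_proba([transaction["features"]])[0][1]
    if fraud_probability >= 0.9:   # hypothetical business threshold
        return "blocked"           # trigger a specific action
    return "approved"              # otherwise pass the result through
```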
The duration and sequence of the development stages will depend on the scale and specifics of both the basic software functionality and the artificial intelligence you want to enrich it with. Below, we present a generalized process outline based on Britapp’s 32 years of experience in software development and data science.
Britapp's best practice: To save time and budget and increase the ROI of AI, we deliver a PoC to uncover possible AI-related roadblocks, such as low-quality data, data silos, and data scarcity.
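Such roadblocks can often be surfaced early with a quick audit of the available dataset. A minimal sketch using pandas, with a hypothetical file name:

```python
import pandas as pd

# Quick data-quality audit for a PoC: surfaces data scarcity, duplicates,
# and missing values before any model is built.
df = pd.read_csv("transactions.csv")  # hypothetical raw extract

print("Rows available:", len(df))                      # data scarcity check
print("Duplicate rows:", df.duplicated().sum())        # low-quality data check
print("Missing values per column:")
print(df.isna().mean().sort_values(ascending=False))   # share of NaNs per column
```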
Defining detailed functional and non-functional requirements for AI, such as the required level of AI accuracy (in some cases, business value can be achieved with just 65-80% accuracy), explainability, fairness, privacy, and the required response time.
Britapp's best practice: When choosing the machine learning model AI will leverage, we carefully consider the trade-offs between the requirements for AI (for example, some models can be less accurate but more explainable and fair).
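One way to make such trade-offs explicit is to evaluate candidate models against the stated acceptance thresholds side by side. A minimal scikit-learn sketch comparing an explainable logistic regression with a typically more accurate but less interpretable gradient boosting model; the 70% accuracy floor and the synthetic dataset are illustrative assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=42)  # stand-in data
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=42)

ACCURACY_FLOOR = 0.70  # illustrative requirement (value achievable at 65-80%)

candidates = {
    "logistic_regression (explainable)": LogisticRegression(max_iter=1000),
    "gradient_boosting (less interpretable)": GradientBoostingClassifier(),
}
for name, model in candidates.items():
    accuracy = model.fit(X_train, y_train).score(X_val, y_val)
    verdict = "meets requirement" if accuracy >= ACCURACY_FLOOR else "rejected"
    print(f"{name}: accuracy={accuracy:.2f} -> {verdict}")
```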
Selecting integration patterns and procedures, and designing the architecture of the solution with integration points between its modules, including integration with the AI module.
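A common integration pattern is to expose the AI module behind a small internal API that the rest of the solution calls. A minimal sketch with FastAPI; the framework choice, endpoint shape, and stubbed scoring logic are assumptions for illustration, not a prescribed design:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ScoringRequest(BaseModel):
    features: list[float]  # hypothetical feature vector sent by the main app

class ScoringResponse(BaseModel):
    prediction: float

@app.post("/predict", response_model=ScoringResponse)
def predict(request: ScoringRequest) -> ScoringResponse:
    # In a real solution, a trained model loaded at startup would score here;
    # a simple stub keeps the integration point itself in focus.
    score = sum(request.features) / max(len(request.features), 1)
    return ScoringResponse(prediction=score)
```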
Launching an initiative to integrate AI into business-critical software may require organizational changes to increase the chances of its successful implementation and adoption:
Developing the front end and the back end of the software (the server side and APIs, including the APIs needed for AI module integration). Running QA procedures throughout the development process to validate software quality.
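On the back-end side, calling such an AI endpoint is then an ordinary HTTP request. A sketch using the requests library; the internal URL and payload are hypothetical, continuing the endpoint example above:

```python
import requests

# Back-end call to the AI module's API sketched earlier; URL is hypothetical.
response = requests.post(
    "http://ai-service.internal/predict",
    json={"features": [0.4, 1.2, 3.5]},
    timeout=2,  # a response-time budget is part of the non-functional requirements
)
response.raise_for_status()
print(response.json()["prediction"])
```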
1. Data preparation
Duration: 1-2 weeks (this process can be reiterated to increase the quality of AI deliverables)
Britapp's best practice: To significantly streamline this time-consuming stage, we use automation tools (e.g., Trifacta, OpenRefine, DataMatch Enterprise, as well as tools within leading AI cloud platforms – Amazon SageMaker, Azure Machine Learning, Google AI Platform).
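Where a dedicated tool is overkill, routine cleaning steps can also be scripted. A minimal pandas sketch covering typical fixes; file and column names are hypothetical:

```python
import pandas as pd

df = pd.read_csv("raw_orders.csv")  # hypothetical raw extract

df = df.drop_duplicates()                                             # remove exact duplicates
df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")  # fix inconsistent types
df["amount"] = df["amount"].fillna(df["amount"].median())             # impute missing values
df = df.dropna(subset=["order_date"])                                 # drop unparsable records

df.to_csv("clean_orders.csv", index=False)  # ready for model training
```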
2. ML model training
Duration: 1-4 weeks (depending on the model’s complexity)
Selecting fitting machine learning algorithms and building ML models. The models are trained on training data and tested against a validation dataset, then their performance is improved by fine-tuning hyperparameters. The best-performing models can be combined into a single ensemble model to decrease the error rate of the separate models. The final ML model is validated against a test dataset in the pre-production environment.
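A condensed scikit-learn sketch of this flow: train, tune hyperparameters via cross-validation, combine strong models into an ensemble, and validate the final model on a held-out test set. The dataset and parameter grids are illustrative assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=3000, random_state=0)  # stand-in data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Fine-tune hyperparameters via cross-validation on the training data.
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [5, None]},
    cv=3,
)
search.fit(X_train, y_train)

# Combine high-performing models into one ensemble to lower the error rate.
ensemble = VotingClassifier(
    estimators=[
        ("tuned_forest", search.best_estimator_),
        ("log_reg", LogisticRegression(max_iter=1000)),
    ],
    voting="soft",
)
ensemble.fit(X_train, y_train)

# Final validation against the held-out test set.
print("Test accuracy:", ensemble.score(X_test, y_test))
```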
The configuration of the AI deployment infrastructure and the approach to integrating AI into the software depend on how AI should output its results:
Pilot deployment to a limited number of software users is recommended to verify the smoothness of AI integration with the target software, check compatibility with the infrastructure (latency, CPU and RAM usage), and run user acceptance tests to handle possible issues before a full-scale rollout.
Britapp's best practice: To accelerate AI deployment, in our projects we leverage leading AI cloud platforms – Amazon SageMaker, Azure Machine Learning, Google AI Platform.
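During a pilot, infrastructure compatibility checks such as latency can be scripted before widening the rollout. A minimal load-probe sketch; the endpoint URL and payload are hypothetical, continuing the earlier example:

```python
import statistics
import time

import requests

# Probe the pilot deployment's response time; URL and payload are hypothetical.
latencies = []
for _ in range(100):
    start = time.perf_counter()
    requests.post(
        "http://ai-service.internal/predict",
        json={"features": [0.4, 1.2, 3.5]},
        timeout=5,
    )
    latencies.append(time.perf_counter() - start)

latencies.sort()
print(f"median latency: {statistics.median(latencies) * 1000:.1f} ms")
print(f"p95 latency:    {latencies[94] * 1000:.1f} ms")  # 95th of 100 samples
```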
Tracking and fixing software bugs and AI integration issues, optimizing software performance, enhancing the UI based on user feedback, and developing new features or extending AI-enabled functionality in response to evolving business and user needs.
Maintenance of AI is a separately controlled process. It includes monitoring ML model performance to detect ‘drift’ (decreasing accuracy and increasing bias as the data AI processes grows and starts deviating from the initial training data).
In case of drift, models should be retrained with new hyperparameters or newly engineered features that reflect the shifts in data patterns. They can also be replaced by challenger models with higher performance (identified during A/B testing).
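A common way to quantify such drift is the Population Stability Index (PSI), which compares a feature’s live distribution against its training-time distribution. A minimal NumPy sketch; the synthetic data is a stand-in, and the 0.2 alert threshold is a widely used rule of thumb rather than a universal standard:

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between the training-time ('expected') and live ('actual') data."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid division by zero and log(0) on empty bins.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct)
                        * np.log(actual_pct / expected_pct)))

training_feature = np.random.normal(0.0, 1.0, 10_000)  # stand-in training data
live_feature = np.random.normal(0.3, 1.2, 10_000)      # stand-in shifted live data

psi = population_stability_index(training_feature, live_feature)
if psi > 0.2:  # common rule-of-thumb threshold for significant drift
    print(f"PSI={psi:.2f}: drift detected, consider retraining or a challenger model")
```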