How to Build Software Powered By Artificial Intelligence

The Essence of Developing Software with AI Capabilities

Developing software with AI capabilities means building new software, or evolving existing software, to deliver AI analytics results to users (e.g., demand predictions) and/or trigger specific actions based on them (e.g., blocking fraudulent transactions).

Supported by AI, an application can automate business processes, personalize service delivery, and surface business-specific insights. According to Deloitte, 90% of seasoned AI adopters say that “AI is very or critically important to their business success today”.

Use Cases for Software with AI Capabilities

Business process automation
  • Chatbots
  • Search engines
  • Automated document generation
  • Optical character recognition engine for data extraction from paper documents
  • Job candidate screening and shortlisting
Production management
  • Predictive maintenance
  • Demand and throughput forecasting
  • Process quality prediction
  • Production loss root cause analysis
Customer analytics
  • Sentiment analysis
  • Customer behavior prediction
  • Sales forecasting
Risk management
  • Counterparty risk analytics
  • Potential damage prediction
  • Fraud detection
Supply chain management
  • Demand forecasting
  • Lead time forecasting
  • Inventory optimization
Personalized service delivery
  • Customer segmentation
  • Recommendation engines

Roadmap: Developing Software with AI Capabilities

The duration and sequence of the development stages depend on the scale and specifics of both the core software functionality and the AI capabilities you want to enrich it with. Below is a generalized process outline based on Britapp's 32 years of experience in software development and data science.

01
Feasibility study
Duration: 1 month
  • Outlining high-level software requirements (in case of new software).
  • Creating a proof of concept (PoC) to verify the technical and economic feasibility of enriching the software with AI and to estimate the scope of work, timeline, budget, and risks.
  • Calculating ballpark ROI of AI implementation.
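A ballpark ROI calculation at this stage can be as simple as comparing the expected gains against the upfront and running costs over a planning horizon. A minimal sketch of that arithmetic (all figures below are illustrative placeholders, not real project numbers):

```python
def ballpark_roi(annual_gain: float, years: int,
                 upfront_cost: float, annual_running_cost: float) -> float:
    """Return ROI as a fraction, e.g. 0.5 means a 50% return."""
    total_gain = annual_gain * years
    total_cost = upfront_cost + annual_running_cost * years
    return (total_gain - total_cost) / total_cost

# Example: $120k/year in prevented fraud losses over 3 years,
# $150k to build the AI module, $30k/year to run it.
roi = ballpark_roi(annual_gain=120_000, years=3,
                   upfront_cost=150_000, annual_running_cost=30_000)
print(f"ROI: {roi:.0%}")  # (360k - 240k) / 240k = 50%
```

At the feasibility stage the inputs themselves are rough estimates, so the output should be treated as an order-of-magnitude check rather than a forecast.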

Britapp's best practice: To save time and budget and increase the ROI of AI, we deliver a PoC that uncovers possible AI-related roadblocks, such as low-quality data, data silos, or data scarcity.

02
Business analysis to elicit AI requirements
Duration: 1-6 weeks

Defining detailed functional and non-functional requirements for AI, such as the required level of AI accuracy (in some cases, business value can be achieved with as little as 65-80% accuracy), explainability, fairness, privacy, and the required response time.

Britapp's best practice: When choosing a machine learning model AI will leverage, we carefully consider the trade-offs between AI requirements (for example, some models are less accurate but more explainable and fairer).

03
Solution architecture design
Duration depends on the overall complexity of software functionality

Selecting integration patterns and procedures. Designing the architecture of the solution with integration points between its modules, including integration with an AI module.

04
Business processes preparation (in case of software development for internal use)
Duration: 1-3 months

Launching an initiative to integrate AI into business-critical software may require organizational changes to increase the chances of its successful implementation and adoption:

  • Shifts in data policies to break down data silos across departments, enabling easy access to data and preventing duplicated or contradictory data that decreases AI accuracy.
  • Drawing up a plan for adapting employees’ workflows to the updated (or new) software (e.g., user training and refreshed user guides and policies).
  • Promoting continuous collaboration between business and tech stakeholders.

05
Software development (non-AI part)
Duration: 3-36 months

Developing the front end and the back end of software (the server side and APIs, including necessary APIs for AI module integration). Running QA procedures throughout the development process to validate software quality.

06
AI module development
1. Data preparation

Duration: 1-2 weeks (this process can be reiterated to increase the quality of AI deliverables)

  • Consolidating data from relevant data sources (internal and external, which can be acquired via one-time purchase or a subscription).
  • Performing exploratory analysis on data to discover useful patterns in it, detect obvious errors, outliers, anomalies, etc.
  • Cleansing data: standardizing, replacing missing or deviating variables, removing duplicates, and anonymizing sensitive data.
  • Splitting the resulting data into training, validation, and test sets.
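The cleansing and splitting steps above can be sketched in a few lines of plain Python (the records and split ratios are toy examples, not a prescription):

```python
import random

# Toy records: (customer_id, monthly_spend); None marks a missing value.
raw = [(1, 120.0), (2, None), (3, 80.0), (1, 120.0), (4, 200.0), (5, 95.0)]

# Remove exact duplicates while preserving order.
seen, records = set(), []
for row in raw:
    if row not in seen:
        seen.add(row)
        records.append(row)

# Replace missing spend values with the mean of the known ones.
known = [s for _, s in records if s is not None]
mean_spend = sum(known) / len(known)
records = [(cid, s if s is not None else mean_spend) for cid, s in records]

# Shuffle, then split roughly 70/15/15 into training, validation, and test sets.
random.seed(42)
random.shuffle(records)
n = len(records)
train = records[: int(n * 0.7)]
val = records[int(n * 0.7) : int(n * 0.85)]
test = records[int(n * 0.85) :]
print(len(train), len(val), len(test))
```

In practice these steps run on tabular frameworks or the automation tools named below rather than hand-written loops, but the logic is the same.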

Britapp's best practice: To significantly streamline this time-consuming stage, we use automation tools (e.g., Trifacta, OpenRefine, DataMatch Enterprise, as well as tools within leading AI cloud platforms – Amazon SageMaker, Azure Machine Learning, Google AI Platform).

2. ML model training

Duration: 1-4 weeks (depending on the model’s complexity)

Selecting fitting machine learning algorithms and building ML models. The models are trained on the training data and tested against the validation dataset, then their performance is improved by fine-tuning hyperparameters. The best-performing models can be combined (ensembled) into a single model to decrease the error rate of the individual models. The final ML model is validated against the test dataset in the pre-production environment.
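The tuning-and-ensembling loop can be illustrated with a deliberately tiny model: a single-threshold fraud classifier whose one hyperparameter (the threshold) is grid-searched on the validation set, with a majority vote over the top candidates as the ensemble. A sketch under those toy assumptions:

```python
# Toy data: (transaction_amount, is_fraud) pairs.
train = [(300, 0), (800, 0), (1200, 1), (2500, 1), (400, 0), (1800, 1)]
val   = [(500, 0), (1100, 1), (2000, 1), (700, 0)]

def accuracy(threshold, data):
    """Fraction of examples where 'amount > threshold' matches the label."""
    return sum((x > threshold) == bool(y) for x, y in data) / len(data)

# "Hyperparameter tuning": grid-search the threshold on the validation set.
grid = [250, 500, 750, 1000, 1250]
best = max(grid, key=lambda t: accuracy(t, val))

# Simple ensemble: majority vote of the three best thresholds.
top3 = sorted(grid, key=lambda t: accuracy(t, val), reverse=True)[:3]
def ensemble_predict(x):
    votes = sum(x > t for t in top3)
    return votes >= 2

print(best, accuracy(best, val))
```

Real projects substitute cross-validation, dedicated tuning libraries, and ensembling methods such as bagging or stacking, but the structure (candidate models scored on held-out data, winners combined) is the same.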

07
AI deployment
Duration: 2-4 weeks

The configuration of the AI deployment infrastructure and approach to integrating AI into software depends on how AI should output results:

  • In batches: AI outputs are computed and cached at pre-scheduled intervals. The target software retrieves them from the data storage it is connected to. Higher latency is acceptable.
  • As a web service: near-real-time outputs triggered by a user or a system request via API. Low latency is required.
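The batch pattern above can be sketched as a precompute-and-cache loop; the names and the stand-in scoring function are illustrative, not a real API:

```python
import time

# Hypothetical model: score each customer's churn risk (stand-in for a real model call).
def score(customer_id):
    return (customer_id * 37) % 100 / 100

# Batch pattern: scores are precomputed on a schedule and cached;
# the application reads from the cache, so scoring latency never hits a user request.
cache = {}

def run_batch(customer_ids):
    for cid in customer_ids:
        cache[cid] = {"score": score(cid), "scored_at": time.time()}

def get_score(cid):
    return cache[cid]["score"]  # cache read only; no model call on the request path

run_batch([1, 2, 3])
print(get_score(2))  # 0.74
```

The web-service pattern would instead invoke `score()` inside an API request handler, trading the cache's staleness for up-to-date, low-latency-critical predictions.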

Pilot deployment to a limited number of software users is recommended to verify smooth AI integration with the target software, check compatibility with the infrastructure (latency, CPU and RAM usage), and run user acceptance tests to handle possible issues before a full-scale rollout.

Britapp's best practice: To accelerate AI deployment, in our projects we leverage leading AI cloud platforms – Amazon SageMaker, Azure Machine Learning, Google AI Platform.

08
Maintenance and evolution of AI-powered software

Tracking and fixing software bugs and AI integration issues, optimizing software performance, enhancing the UI based on user feedback, and developing new features or extending AI-enabled functionality based on evolving business and user needs.

Maintenance of AI is a separately controlled process. It includes monitoring ML model performance to detect ‘drift’ (decreasing accuracy and increasing bias as the data the AI processes grows and starts deviating from the initial training data).

In the case of drift, models should be retrained with new hyperparameters or newly engineered features that reflect shifts in data patterns. They can also be replaced by challenger models with higher performance (identified during A/B testing).
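Drift monitoring often reduces to tracking a rolling quality metric and flagging when it falls below an agreed floor, which then triggers retraining. A minimal sketch with an illustrative window size and threshold:

```python
from collections import deque

class DriftMonitor:
    """Track rolling accuracy of a deployed model; flag drift when it falls
    below a threshold (window and threshold values here are illustrative)."""

    def __init__(self, window=100, min_accuracy=0.8):
        self.outcomes = deque(maxlen=window)
        self.min_accuracy = min_accuracy

    def record(self, prediction, actual):
        self.outcomes.append(prediction == actual)

    def drift_detected(self):
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough evidence yet
        return sum(self.outcomes) / len(self.outcomes) < self.min_accuracy

monitor = DriftMonitor(window=10, min_accuracy=0.8)
for i in range(10):
    monitor.record(prediction=1, actual=1 if i < 7 else 0)  # 7/10 correct
print(monitor.drift_detected())  # True: rolling accuracy 0.7 < 0.8
```

In production, ground-truth labels often arrive with a delay, so teams also watch proxy signals (prediction distribution shifts, input feature statistics) that do not require labels.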
