

Artificial Intelligence and Machine Learning

In today’s world, AI and ML have permeated every conceivable industry. The emphasis is squarely on data: with so much data available, failing to use it to derive actionable insights is a real business disadvantage, forgoing improved operational efficiency and accelerated business growth. Businesses generate tremendous amounts of data from social media platforms, IoT devices, websites and smartphones, and age-old traditional methods are no longer feasible for processing such large volumes of information. New technologies based on Artificial Intelligence and Machine Learning are revolutionizing businesses: real-time data ingestion, video and image analytics, virtual assistants, text-to-speech and speech-to-text are providing unparalleled, valuable insights.

Over the last decade, the cost of hardware to process huge quantities of data has plummeted, and so has the cost of storage. Cloud platforms like AWS, Microsoft Azure and Google Cloud have made it convenient for businesses to move their operations to the cloud and leverage advanced hardware capabilities at a cost advantage, without having to maintain infrastructure locally.


What we do

With decades of proven expertise in the Aerospace, Digital Technology and Automotive verticals, Accord is well placed to infuse solutions in these verticals with AI and ML technologies, enabling customers to reap the vast benefits of data insights. Accord’s AI/ML expertise covers the entire spectrum, from traditional AI/ML techniques to the very latest in Deep Learning and Natural Language Processing.




Accord offers AI/ML services in the following areas:


  • Deep Learning Apps
  • Image/Video Analytics
  • Data Analytics
  • Image Processing
  • ML Model Development
  • EDGE Embedded System Deployment
  • NLP
  • AI as a Service


Accord’s core competencies are in the following areas:

  • Deep Neural Networks and Convolutional Neural Networks
  • Machine Learning Algorithm Design and Development
  • Traditional Image Processing Algorithms and Applications
  • EDGE Computing Leveraging CUDA and cuDNN
  • Logistic Regression and Neural Network based Data Analytics
  • Natural Language Processing – Text Recognition




  • Machine Learning and Deep Neural Networks
  • Image Processing
  • EDGE Computing
  • Logistic Regression
  • Natural Language Processing
  • AI/ML Project Lifecycle
  • Tools and Technologies

Machine Learning and Deep Neural Networks

Many Machine Learning techniques are well suited to solving real-world problems and gathering insights. Machine Learning consists of supervised, unsupervised and reinforcement learning techniques. Suitable ML algorithms are applied based on the use case – Linear Regression, Logistic Regression, Decision Tree, SVM, Naïve Bayes, KNN, K-Means etc.
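As a minimal sketch of one of the algorithms listed above, a k-nearest-neighbours (KNN) classifier can be written in plain Python; the toy data, function name and class labels here are illustrative only:

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest training points.

    `train` is a list of ((x, y), label) tuples; distance is squared Euclidean.
    """
    nearest = sorted(train, key=lambda p: (p[0][0] - query[0]) ** 2
                                        + (p[0][1] - query[1]) ** 2)
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

# Two well-separated clusters as toy training data.
train = [((0, 0), "A"), ((0, 1), "A"), ((1, 0), "A"),
         ((5, 5), "B"), ((5, 6), "B"), ((6, 5), "B")]

print(knn_predict(train, (0.5, 0.5)))  # near cluster A
print(knn_predict(train, (5.5, 5.5)))  # near cluster B
```

The same majority-vote idea carries over to higher-dimensional feature vectors; only the distance computation changes.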

Artificial Neural Networks, Deep Neural Networks and Convolutional Neural Networks are modelled after the networks formed by neurons in the human brain. They are based on the principle of conduction of nerve impulses from the source of a stimulus to the brain centres that decode and process it. Artificial Neural Networks use several layers of fully interconnected neurons, and the propagation of information is controlled by weights.
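To make the role of weights concrete, the classic XOR example can be sketched as a tiny 2-2-1 feed-forward network with hand-picked weights and a threshold activation; this is an illustration of the principle, not trained production code:

```python
def step(x):
    # Threshold activation: the neuron fires (1) when its weighted input is positive.
    return 1 if x > 0 else 0

def xor_net(x1, x2):
    """A 2-2-1 feed-forward network with hand-picked weights computing XOR."""
    h1 = step(1 * x1 + 1 * x2 - 0.5)    # hidden neuron behaves like OR
    h2 = step(1 * x1 + 1 * x2 - 1.5)    # hidden neuron behaves like AND
    return step(1 * h1 - 1 * h2 - 0.5)  # output: OR and not AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))
```

In practice the weights are not hand-picked but learned from data by backpropagation; the forward pass, however, has exactly this shape.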

Multi-Layer Perceptrons, Feed Forward Networks, Convolutional Neural networks, Recurrent Neural Networks, Long Short-Term Memory Networks are different types of neural networks used in several applications like image analytics/classification, Speech and Text Recognition, Univariate and Multivariate predictions etc.

Accord has successfully delivered several applications making extensive use of different types of Neural Networks. Techniques for optimization and compression of neural networks, network ensembling and transfer learning have been used to achieve the best results.


Image Processing

With the availability of specialized hardware that treats the image as a basic unit of computation, image processing applications can extract valuable insights from a variety of input images using traditional, handcrafted techniques. Traditional methods gain significance when the input space is very generic in nature and it is not possible to adequately train neural networks. Some of the image processing functions carried out using traditional means are:

  • Edge Detection
  • Contrast Enhancement
  • Sharpness Control
  • Zoom Control
  • Panorama Generation
  • Scene or Template Matching
  • Object Tracking

Edge Detection

Edge detection is used to detect the location and presence of edges in an image by detecting changes in image intensity: when an edge occurs, there is a sudden change in pixel values, and this is the basis for determining edges. It is, in effect, a filter that extracts the edge points within an image, and many operators are available for the purpose. Edge detection is mainly used in object identification, feature recognition, pattern recognition and image segmentation. It forms the basis for further image analysis, such as connected component analysis for image segmentation. Edge detection and segmentation have been used as one of the approaches in implementing an Automatic License Plate Recognition System.
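As a minimal sketch of intensity-based edge detection, the classic Sobel operator can be applied in plain Python; the 6×6 toy image below (dark left half, bright right half) is illustrative only:

```python
# Tiny grayscale "image": dark left half, bright right half (a vertical edge).
img = [[0, 0, 0, 255, 255, 255] for _ in range(6)]

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # responds to horizontal intensity change
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # responds to vertical intensity change

def convolve_at(image, kernel, r, c):
    """Apply a 3x3 kernel centred on pixel (r, c)."""
    return sum(kernel[i][j] * image[r + i - 1][c + j - 1]
               for i in range(3) for j in range(3))

def sobel_magnitude(image):
    """Gradient magnitude at each interior pixel; large values mark edges."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            gx = convolve_at(image, SOBEL_X, r, c)
            gy = convolve_at(image, SOBEL_Y, r, c)
            out[r][c] = (gx * gx + gy * gy) ** 0.5
    return out

mag = sobel_magnitude(img)
# The magnitude is large only at the boundary between the dark and bright halves.
```

Real pipelines typically follow this with thresholding and non-maximum suppression (as in the Canny detector) to obtain thin, clean edge maps.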


Image Enhancement

Image enhancement can be considered one of the fundamental processes in image analysis. The goal of contrast enhancement is to improve the quality of an image so that it becomes more suitable for a particular application. Numerous image enhancement methods are in use, and efforts have been directed at further increasing the quality of the enhancement results while minimizing computational complexity and memory usage.

Histogram transformation is one of the basic image enhancement techniques used today. It facilitates subsequent higher-level operations such as detection and identification. Histogram equalization is a contrast enhancement technique in the spatial domain that uses the histogram of the image. Histogram equalization generally varies the global contrast of the image and is most suited to images that are uniformly bright or dark.

Some of the Histogram Techniques that are used in Image Pre-processing are:

  • CLAHE – Contrast Limiting Adaptive Histogram Equalization
  • BBHE - Brightness Preserving Bi-Histogram Equalization
  • DSIHE - Dualistic Sub-Image Histogram Equalization
  • Quantized Histogram Equalization
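As a minimal sketch of the plain histogram equalization that the variants above build on, each grey level is remapped through the normalized cumulative distribution function (CDF); the tiny low-contrast image below is illustrative only:

```python
def equalize(image, levels=256):
    """Histogram equalization: map each grey level through the normalized CDF."""
    flat = [p for row in image for p in row]
    n = len(flat)
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # Cumulative distribution function of grey levels.
    cdf, total = [0] * levels, 0
    for g in range(levels):
        total += hist[g]
        cdf[g] = total
    cdf_min = next(c for c in cdf if c > 0)
    def remap(p):
        return round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
    return [[remap(p) for p in row] for row in image]

# A low-contrast image: all values crowded into 100..103.
img = [[100, 100, 101, 101], [102, 102, 103, 103]]
out = equalize(img)
# After equalization the values are spread across the full 0..255 range.
```

The adaptive variants listed above (e.g. CLAHE) apply essentially this remapping per tile, with a limit on contrast amplification, instead of globally.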

In image terminology, noise refers to pixels that introduce spurious brightness and colour variation. Blurring is applied to an image to smoothen out or remove the pixels that make it appear excessively bright or sharp, hence making the image appear fuzzy and soft to the human eye. Image sharpening does the opposite: it modifies a blurry image so that features are enhanced and processing of the image becomes easier.

Image zooming is the task of applying transformations to an input image so as to obtain a visually more detailed, or less noisy, output image of larger overall size. To obtain better and finer images for processing, images often need to be zoomed in and out, or reproduced at a higher resolution from a lower one; the new pixel values are interpolated from the existing ones, so none of the original image data is discarded. Image resizing, in contrast, increases or decreases the total number of pixels, and downsizing is subject to loss of detail.
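The simplest interpolation scheme for zooming is nearest-neighbour, where each output pixel copies the closest input pixel; a minimal sketch (toy image, integer zoom factor only):

```python
def zoom_nearest(image, factor=2):
    """Upscale by an integer factor using nearest-neighbour interpolation."""
    return [[image[r // factor][c // factor]
             for c in range(len(image[0]) * factor)]
            for r in range(len(image) * factor)]

img = [[1, 2],
       [3, 4]]
big = zoom_nearest(img)
# Each source pixel becomes a 2x2 block:
# [[1, 1, 2, 2],
#  [1, 1, 2, 2],
#  [3, 3, 4, 4],
#  [3, 3, 4, 4]]
```

Bilinear and bicubic interpolation replace the simple copy with weighted averages of neighbouring pixels, producing smoother results at more computational cost.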


Panorama Generation

The method of integrating many images with overlapping fields of view to create a segmented panorama or high-resolution cumulative image is known as image stitching or Panorama Generation. Although images can have varying degrees of overlap, in a given optical system, the overlap between component images is often fixed thereby simplifying the panorama generation process.

Computer vision and image processing methods like key point detection, local invariant descriptors (such as SIFT: Scale Invariant Feature Transform and SURF: Speeded-Up Robust Features), key point matching and perspective warping are used to create panorama images.


Scene or template matching

The process of "image matching" involves finding enough pixels or area correspondences between two or more images of the same scene.

Template Matching involves the following steps:

Detection: Identify the interest points. Each feature point’s local appearance is characterized in some form that is unaffected by variations in lighting, translation, scale and in-plane rotation; for each feature point, we typically end up with a descriptor vector.

Matching: Descriptors are compared to find features that correspond across the images. Interest points are points that have an expressive texture: the intersection of two or more edge segments, or a place where the direction of the object’s boundary changes rapidly, is considered an interest point.
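For intuition, the matching step can be reduced to its simplest form: slide the template over the image and score each window, here with a sum-of-squared-differences score (toy data, illustrative only; real systems match descriptor vectors such as SIFT/SURF rather than raw pixels):

```python
def ssd(image, tmpl, r0, c0):
    """Sum of squared differences between tmpl and the image window at (r0, c0)."""
    return sum((image[r0 + i][c0 + j] - tmpl[i][j]) ** 2
               for i in range(len(tmpl)) for j in range(len(tmpl[0])))

def match_template(image, tmpl):
    """Slide the template over the image; return the top-left of the best window."""
    th, tw = len(tmpl), len(tmpl[0])
    positions = [(r, c) for r in range(len(image) - th + 1)
                        for c in range(len(image[0]) - tw + 1)]
    return min(positions, key=lambda rc: ssd(image, tmpl, *rc))

img = [[0, 0, 0, 0],
       [0, 9, 8, 0],
       [0, 7, 6, 0],
       [0, 0, 0, 0]]
tmpl = [[9, 8],
        [7, 6]]
print(match_template(img, tmpl))  # → (1, 1)
```

Normalized cross-correlation is usually preferred over raw SSD in practice, since it is robust to uniform lighting changes between the template and the scene.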



Object Tracking

Object tracking is one of the main tasks in Video Analytics. The aim of object tracking is to establish that objects identified in a series of image frames are the same, or alternately to recognize that new objects have entered the scene. Tracks are created for objects found in successive frames and a unique track ID is assigned to each. New tracks are created for objects encountered afresh, and tracks are deleted when objects disappear from successive frames.

Object Tracking is useful in several applications like Surveillance Systems. There are broadly two types of object tracking – single and multiple object tracking. The specific algorithm used will depend on the type of tracking needed.

The Kalman Filter and its nonlinear variant, the Extended Kalman Filter, are among the most popular tracking algorithms. Correlation-based filters like the KCF (Kernelized Correlation Filters) Tracker and CSRT (Channel and Spatial Reliability Tracker) are other widely used choices.
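The predict/update cycle at the heart of Kalman tracking can be sketched in one dimension; the noise parameters and sensor readings below are illustrative only (real trackers use a multi-dimensional state of position and velocity):

```python
def kalman_1d(measurements, q=1e-3, r=0.5):
    """Scalar Kalman filter: estimate a (nearly) constant value from noisy readings.

    q is the process-noise variance, r the measurement-noise variance.
    """
    x, p = measurements[0], 1.0      # initial state estimate and its variance
    estimates = [x]
    for z in measurements[1:]:
        p += q                       # predict: uncertainty grows by process noise
        k = p / (p + r)              # Kalman gain: how much to trust the measurement
        x += k * (z - x)             # update: move the estimate toward the measurement
        p *= (1 - k)                 # updated uncertainty shrinks
        estimates.append(x)
    return estimates

# Noisy sensor readings of a true value of 10.
zs = [10.3, 9.8, 10.1, 9.9, 10.2, 10.0, 9.7, 10.1]
est = kalman_1d(zs)
# The estimate settles near 10 while smoothing out the measurement noise.
```

In a tracker, the same cycle runs per track per frame, with the detection centroid as the measurement and the gain balancing the motion model against the detector.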


EDGE Computing

Accord specializes in the deployment of AI/ML applications on EDGE devices. Applications are designed on Ampere-architecture NVIDIA EDGE server machines using the NVIDIA application execution pipeline and DeepStream frameworks so as to extract maximum performance from the GPU system. Accord has built applications that run as many as 5 neural networks concurrently on 90 or more video sequences, delivering about 3600 FPS.

With this design methodology, the real world performance on EDGE devices (like NVIDIA Volta architecture devices) can be quite accurately estimated.

A typical Computer Vision based DeepStream application pipeline is shown below (image courtesy: NVIDIA).

edge_computing_2

Logistic Regression

Accord has used Linear and Logistic Regression Machine Learning techniques to solve prediction and classification problems. Linear regression and logistic regression are both methods for modelling relationships between dependent and independent variables; both build statistical models, but for different tasks. Linear regression models continuous outcomes, while logistic regression models binary outcomes (i.e., whether or not an event happened). Linear regression fits a straight line to the data, while logistic regression fits an S-shaped curve such as the sigmoid.

Multi-Layer Perceptrons (Neural Networks) can also be developed to determine the relationship between variables and predict an outcome.

These regression techniques have been used to determine the State of Charge of a rechargeable Lithium-Ion cell when profiling data from previous charge/discharge cycles was available. In this case, a relationship was established between the charge cycle number, current cell voltage and current draw on the one hand and the resultant state of charge on the other.
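As a minimal sketch of how logistic regression fits a sigmoid to data, the weights can be learned by gradient descent on the log-loss; the one-dimensional toy data below is illustrative only and is not the battery model itself:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.5, epochs=2000):
    """Fit y ~ sigmoid(w*x + b) by stochastic gradient descent on the log-loss."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)
            w -= lr * (p - y) * x   # gradient of the log-loss w.r.t. w
            b -= lr * (p - y)       # gradient of the log-loss w.r.t. b
    return w, b

# Toy data: inputs below 3 belong to class 0, inputs above 3 to class 1.
xs = [0, 1, 2, 4, 5, 6]
ys = [0, 0, 0, 1, 1, 1]
w, b = fit_logistic(xs, ys)
# The fitted sigmoid crosses 0.5 between the two classes.
```

The same machinery extends to multiple features by making `w` a vector and the product a dot product; libraries simply add regularization and faster solvers on top.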


Natural Language Processing

Natural Language Processing involves making computers understand human languages, both as text and as speech. This requires building human-like intelligence into software using specialized neural networks and libraries.

Software can readily make sense of structured data such as spreadsheets and database tables. Human languages belong to the unstructured category of data, and interpreting this unstructured data is the goal of Natural Language Processing.
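A small illustration of turning unstructured text into structured data is the bag-of-words representation, which reduces free text to word counts that downstream models can consume; the sample sentence and tokenization rule below are illustrative:

```python
import re
from collections import Counter

def bag_of_words(text):
    """Turn unstructured text into a structured word-count vector."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(tokens)

doc = "The engine failed. Engine temperature was high."
vec = bag_of_words(doc)
print(vec["engine"])  # 2
```

Modern NLP replaces such counts with learned embeddings, but the pipeline shape is the same: tokenize, vectorize, then model.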

AI/ML Project Lifecycle

Accord has experience working on all stages of the AI/ML product development lifecycle. The typical lifecycle of an AI/ML solution is illustrated below. As emphasized earlier, product development begins with data acquisition, a model is trained to learn from the data and make predictions, and the final model is deployed on target systems. AI/ML differs from traditional programming in that the most important artefact in the lifecycle is not the code but the data: in AI/ML, data takes the place that source code occupies in traditional programming.

With more than three decades of experience in safety-critical Aerospace product development, Accord builds the design and development rigour required of Aerospace products into its application development involving Artificial Intelligence.

aimlproject_lifecycle

The W Model (image courtesy: EASA) shown below is applicable to most AI/ML applications and provides adequate design assurance for the models generated.

wmodel

Tools and Technologies

Accord has explored the length and breadth of tools and technologies in the AI/ML space and has successfully completed several projects and proofs of concept. The latest tools and technologies are used to build state-of-the-art AI/ML applications.

tools_technologies

Contact Accord for any needs in implementing software and solutions involving Artificial Intelligence, Machine Learning and Data Analytics.