AI for Agriculture

Sanjivani

Instant crop diagnosis for the 2G world. Edge-first AI powered by MobileNetV2, built for real farms—not fast internet.

Model Accuracy

98.2%

Validated across 15 different crop disease classes (Tomato, Potato, Pepper) using a custom MobileNetV2 architecture.

2s
Diagnosis Time (vs 3 Days)
Offline
PWA Capabilities
PyTorch
Flask
React
MobileNetV2
PWA
IndexedDB

The Problem

Most farmers rely on delayed expert visits or guesswork, often losing days and yields. Existing AI tools assume reliable internet and high-end devices—luxuries that rural agriculture rarely has.

The Solution

Sanjivani brings expert-level AI to the edge. Farmers capture a leaf image and get actionable insights within seconds, even in offline or low-bandwidth conditions.

Key Capabilities

Edge AI Detection

Custom MobileNetV2 model covering 15 disease classes across Tomato, Potato, and Pepper.

Ultra-Fast Inference

Sub-2-second diagnosis replaces multi-day consultation delays.

Offline-First PWA

Functional without active internet. Syncs data automatically when connectivity returns.

Mobile Optimized

Designed for low-end Android devices with minimal memory and compute footprint.

Local Caching

IndexedDB ensures zero data loss during network failures or outages.

Actionable Insights

Provides immediate treatment recommendations alongside disease identification.
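Internally, pairing a diagnosis with advice can be as simple as a lookup from predicted class to a curated recommendation. The sketch below is illustrative only; the class labels and advice strings are placeholders, not the app's actual knowledge base:

```python
# Illustrative only: maps a predicted class label to farmer-facing advice.
# These entries are placeholders, not Sanjivani's curated recommendations.
TREATMENTS = {
    "Tomato___Early_blight": "Remove affected leaves; apply a copper-based fungicide.",
    "Potato___Late_blight": "Destroy infected plants; avoid overhead irrigation.",
    "Pepper___Bacterial_spot": "Use certified disease-free seed; rotate crops.",
}

def get_recommendation(predicted_class):
    # Fall back to safe general advice when a class has no curated entry yet.
    return TREATMENTS.get(predicted_class, "Consult a local extension officer.")
```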

Engineered for the Edge.

The core challenge wasn't just accuracy; it was accessibility. Rural farmers can't afford to wait on a slow round trip to a remote server, and connectivity is spotty at best.

We use a "store-and-forward" architecture: images are cached locally in IndexedDB, predictions are served immediately from the on-device model or a lightweight local server, and data syncs to the cloud whenever connectivity allows.
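The store-and-forward pattern can be sketched as a small queue: records are always cached first and only discarded after a confirmed upload. In the PWA, IndexedDB plays the role of the local cache; the `StoreAndForwardQueue` class and `send_fn` callable below are illustrative names, not the app's actual code:

```python
from collections import deque

class StoreAndForwardQueue:
    """Cache outgoing records locally; flush them when connectivity returns."""

    def __init__(self, send_fn):
        self.send_fn = send_fn  # callable that uploads one record; raises on failure
        self.pending = deque()  # local cache (IndexedDB plays this role in the PWA)

    def submit(self, record):
        # Enqueue first so nothing is lost if the upload fails mid-flight.
        self.pending.append(record)
        self.flush()

    def flush(self):
        # Drain the queue until it is empty or an upload fails (offline).
        while self.pending:
            try:
                self.send_fn(self.pending[0])
            except ConnectionError:
                return False        # still offline; keep records cached
            self.pending.popleft()  # discard only after a confirmed upload
        return True
```

On the device, `submit` corresponds to saving a capture locally, and `flush` runs when the service worker detects connectivity.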

AI Layer

MobileNetV2 (Fine-tuned) + PyTorch.

Backend

Flask API + OpenCV Pre-processing.

Frontend

React PWA + Service Workers.

inference_engine.py
import torch
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/predict', methods=['POST'])
def predict():
    # 1. Validate input
    if 'file' not in request.files:
        return jsonify({'error': 'No file provided'}), 400

    # 2. Pre-process for MobileNetV2 (decode, resize, normalize)
    img_bytes = request.files['file'].read()
    tensor = transform_image(img_bytes)

    # 3. Run inference without tracking gradients
    with torch.no_grad():
        logits = model(tensor)
    probs = torch.softmax(logits, dim=1)
    confidence, idx = probs.max(dim=1)

    return jsonify({
        'class': class_names[idx.item()],
        'confidence': confidence.item()
    })
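The request/response contract of the endpoint above can be exercised with Flask's built-in test client. The handler below is a stub standing in for the real one, so the class name and confidence it returns are placeholders:

```python
import io
from flask import Flask, jsonify, request

# Minimal stand-in app: same contract as the real /predict endpoint,
# but with a hard-coded response instead of a model (illustrative only).
app = Flask(__name__)

@app.route('/predict', methods=['POST'])
def predict():
    if 'file' not in request.files:
        return jsonify({'error': 'No file provided'}), 400
    return jsonify({'class': 'Tomato___Early_blight', 'confidence': 0.98})

# Exercise the endpoint without running a server.
client = app.test_client()
resp = client.post('/predict',
                   data={'file': (io.BytesIO(b'fake-image-bytes'), 'leaf.jpg')},
                   content_type='multipart/form-data')
```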