Github pages attempt #1
(Hopefully the last)
7Zenox committed Nov 28, 2023
1 parent 964e321 commit 5a8564d
Showing 56 changed files with 927 additions and 412 deletions.
2 changes: 1 addition & 1 deletion frontend/.gitignore → .gitignore
Expand Up @@ -32,4 +32,4 @@ yarn-error.log*

# typescript
*.tsbuildinfo
next-env.d.ts
next-env.d.ts
File renamed without changes.
67 changes: 31 additions & 36 deletions README.md
@@ -1,36 +1,31 @@
# Ayre

> *"... I'll support you!"*

Ayre is a Visual Question Answering project created by members of [this organization](https://github.com/projectayre) as part of our semester V projects and coursework during our Bachelor's degree. It is primarily a research-based project, since Visual Question Answering with Sentiment Analysis is a significant research gap that we would like to address. The project will feature several components coming together as a complete full-stack AI project.

## Planned/Implemented features [checklist]

### AI

All AI development is to be done using Torch, for better support with transformers.

- [x] ~~Sentiment Analysis on images by integrating fuzzy logic.~~
- [ ] NEW: Semantic Image Segmentation for improved understanding of the image.
- [ ] Visual Question Answering task using data from sentiment analysis.
- [ ] At least 1 Journal and 1 Conference paper based on novel approaches made here.
- [ ] MLOps pipelines that can be easily imported and deployed inside the backend.
- [ ] (Optional) Github Packages Integration

### Frontend

A simple, beautiful UI/UX with the following features:

- [ ] Responsive Design for accessibility.
- [ ] Light/Dark themes.
- [ ] OAuth integration, with some of its benefits.
- [ ] Deployment on Vercel/Github Pages.
- [ ] (Optional) A Loading screen.

### Backend

- [ ] Minimal and fast backend created in Django.
- [ ] Database connectivity for live storage of queries and information.
- [ ] Modular design for improved CI/CD.
- [ ] Full cloud hosting/self-hosting of the backend.
- [ ] (Optional) Scalability Options.
# Next.js & NextUI Template

This is a template for creating applications using Next.js 13 (app directory) and NextUI (v2).

## Technologies Used

- [Next.js 13](https://nextjs.org/docs/getting-started)
- [NextUI v2](https://nextui.org/)
- [Tailwind CSS](https://tailwindcss.com/)
- [Tailwind Variants](https://tailwind-variants.org)
- [TypeScript](https://www.typescriptlang.org/)
- [Framer Motion](https://www.framer.com/motion/)
- [next-themes](https://github.com/pacocoursey/next-themes)

## How to Use

### Install dependencies

```bash
npm install
```

### Run the development server

```bash
npm run dev
```

## License

Licensed under the [MIT license](https://github.com/nextui-org/next-app-template/blob/main/LICENSE).
File renamed without changes.
8 changes: 5 additions & 3 deletions frontend/app/blog/layout.tsx → app/algorithm/layout.tsx
@@ -1,13 +1,15 @@
export default function BlogLayout({
import "@/styles/globals.css";

export default function AlgorithmLayout({
children,
}: {
children: React.ReactNode;
}) {
return (
<section className="flex flex-col items-center justify-center gap-4 py-8 md:py-10">
<div className="inline-block max-w-lg text-center justify-center">
<div className="inline-block max-w-4xl text-center justify-center">
{children}
</div>
</section>
);
}
}
79 changes: 79 additions & 0 deletions app/algorithm/page.tsx
@@ -0,0 +1,79 @@
'use client';

import { Image, Spacer } from "@nextui-org/react";
import { title } from "@/components/primitives";

export default function AlgorithmPage() {
return (
<div className='grid-cols-1'>

<div className='grid-rows'>
<p className="text-4xl justify-center my-40 py-0.5" css={{ fontFamily: "Poppins" }}>
About Us
</p>
</div>

<div className='grid-rows'>
<p className="justify-center font-normal text-lg text-justify" css={{ fontFamily: "Poppins" }}>
<b> Adaptive Yielding Response Engine (AYRE) </b> is an engine that is not only semantically attentive but also emotionally aware.
The engine is a Semantic-Attentive Visual Question Answering system with the capability of generating answers to questions based on the content of the image and the mood it portrays.
</p> <Spacer y={50} />

<p className="text-2xl justify-center font-normal text-justify" css={{ fontFamily: "Poppins" }}>
Problem Statement
</p> <br />

<p className="justify-center font-normal text-lg text-justify" css={{ fontFamily: "Poppins" }}>
Given an image, can our machine answer users’ questions in natural language?
If so, can it frame answers in a way that captures any abstract semantics beyond just the image or the question?
</p> <Spacer y={50} />

<p className="text-2xl justify-center font-normal text-justify" css={{ fontFamily: "Poppins" }}>
Methodology
</p> <Spacer y={50} />

<Image
alt="AYRE pipeline diagram"
src="AYRE.png"
/> <Spacer y={50} />

<p className="justify-center font-normal text-lg text-justify" css={{ fontFamily: "Poppins" }}>
A modified vision transformer, together with a modified U-Net model, forms a seamless pipeline that performs semantic segmentation and VQA, in the hope that the segmentation-aware analysis can outperform SOTA methods.
</p> <br />

<p className="justify-center font-normal text-lg text-justify" css={{ fontFamily: "Poppins" }}>
A modified U-Net segmentation model provides semantic information about each pixel in the image.
These color segments, as latent tensors, provide the value matrix for attention masks.
The segments can also be annotated on the fly to provide context to the user.
</p> <br />

<Image
alt="Semantic segmentation example"
src="Segmentation.png"
isBlurred
/> <Spacer y={50} />

<p className="justify-center font-normal text-lg text-justify" css={{ fontFamily: "Poppins" }}>
A dual-stream transformer Vision-Language model that uses self-attention on both visual and text features to maximize information gain. The novelty lies in the attention heads: these are generated using image segmentation to ensure every label gets its own attention.
</p> <br />

<Image
alt="Dual stream transformer diagram"
src="Dual stream transofrmer.png"
className=""
isBlurred
/> <Spacer y={50} />

<p className="justify-center font-normal text-lg text-justify" css={{ fontFamily: "Poppins" }}>
The AYRE sentiment analysis module is designed to interpret an image's emotional undertones through color analysis. The system maps dominant colors extracted from the image to corresponding emotions using color psychology. This process begins by loading the image into the system, which imports the image and converts it to RGB format for accurate color representation.
</p> <br />

<Image
alt="Plutchik's wheel of emotions"
src="Plutchik-wheel.png"
isBlurred
/> <Spacer y={50} />
</div>
</div>
);
}
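The color-psychology mapping described above (dominant color → emotion) can be sketched roughly as follows. This is a minimal TypeScript illustration, assuming a simple hue-band lookup loosely inspired by Plutchik's wheel; the thresholds, emotion labels, and function names (`rgbToHue`, `dominantColorToEmotion`) are hypothetical and are not AYRE's actual module.

```typescript
// Hypothetical sketch: map a dominant RGB color to an emotion label by hue band.
type RGB = { r: number; g: number; b: number };

// Convert an RGB color to a hue angle in degrees (0-360).
function rgbToHue({ r, g, b }: RGB): number {
  const rn = r / 255, gn = g / 255, bn = b / 255;
  const max = Math.max(rn, gn, bn);
  const min = Math.min(rn, gn, bn);
  const d = max - min;
  if (d === 0) return 0; // achromatic: hue is undefined, use 0
  let h: number;
  if (max === rn) h = ((gn - bn) / d) % 6;
  else if (max === gn) h = (bn - rn) / d + 2;
  else h = (rn - gn) / d + 4;
  return (h * 60 + 360) % 360; // normalize negative hues into [0, 360)
}

// Illustrative hue bands; the real mapping would come from color-psychology data.
function dominantColorToEmotion(color: RGB): string {
  const hue = rgbToHue(color);
  if (hue < 30 || hue >= 330) return "anger";   // reds
  if (hue < 90) return "joy";                   // oranges/yellows
  if (hue < 150) return "trust";                // greens
  if (hue < 270) return "sadness";              // blues
  return "anticipation";                        // purples/magentas
}

console.log(dominantColorToEmotion({ r: 255, g: 0, b: 0 }));   // "anger"
console.log(dominantColorToEmotion({ r: 0, g: 0, b: 255 }));   // "sadness"
```

In a real pipeline the dominant color would first be extracted from the RGB image (e.g. by clustering pixel colors), then passed through a mapping like this one.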
File renamed without changes.
File renamed without changes.
49 changes: 49 additions & 0 deletions app/page.tsx
@@ -0,0 +1,49 @@
'use client';

// Import necessary modules
import { useState } from 'react';
import { Divider, Spacer } from '@nextui-org/react';
import { SketchfabModel } from '../components/SketchfabModel';
import { AnswerCard } from '@/components/answer-card';
import DragDrop from '@/components/drap-drop';

const responseData = {
// image: "https://example.com/image.jpg",
// description: "This is the description of the answer.",
};

export default function Home() {
const [predictionData, setPredictionData] = useState(responseData);

const handlePrediction = (data) => {
// Check whether an image File/Blob is present in the response data
if (data.image) {
// Convert the File/Blob to an object URL so the browser can display it
data.image = URL.createObjectURL(data.image);
}
setPredictionData(data);
};


return (
<section className="grid justify-items-center">
<div className=" grid grid-flow-col items-center gap-10 p-6 pr-40">
<div className="grid justify-items-center max-w-full">
<SketchfabModel />
</div>

<div className="">
<AnswerCard responseData={predictionData} />
</div>
</div>

<Divider />
<Spacer y={2} />

<div className="grid items-center w-full h-fit">
<DragDrop onPrediction={handlePrediction} />
</div>
</section>
);
}
File renamed without changes.
2 changes: 2 additions & 0 deletions app/response/new/page.tsx
@@ -0,0 +1,2 @@
// In case a new page needs to be created after submission

Expand Up @@ -12,7 +12,7 @@ function MeshComponent() {
const gltf = useLoader(GLTFLoader, fileUrl);

useFrame(() => {
mesh.current.rotation.y += 0.01;
mesh.current.rotation.y += 0.005;
});


66 changes: 66 additions & 0 deletions components/answer-card.tsx
@@ -0,0 +1,66 @@
'use client';

// Import necessary modules
import React, { useEffect, useState } from 'react';
import { Card, CardHeader, CardBody, CardFooter, Divider, Image } from '@nextui-org/react';

export const AnswerCard = ({ responseData }) => {
const [displayedData, setDisplayedData] = useState(responseData);

useEffect(() => {
console.log('AnswerCard received data:', responseData);
setDisplayedData(responseData);
}, [responseData]);

useEffect(() => {
console.log('Updating displayed data:', displayedData);
}, [displayedData]);

console.log('Image type:', typeof displayedData.image);

return (
<Card className="w-full h-full bg-transparent" css={{ bgBlur: '#0f111466' }}>
<CardHeader className="flex gap-3">
<div className="flex flex-col">
<p className="text-md"> Ayre's response... </p>
</div>
</CardHeader>

<Divider />

<CardBody>
{displayedData.image ? (
// Render the image directly if it's a URL
typeof displayedData.image === 'string' ? (
<Image
src={displayedData.image}
alt="Image of the answer"
width="500px"
height="auto"
className="rounded-large justify-items-center"
/>
) : displayedData.image instanceof Blob ? (
// Use Blob URL directly
<Image
src={URL.createObjectURL(displayedData.image)}
alt="Image of the answer"
width="500px"
height="auto"
className="rounded-large justify-items-center"
/>
) : (
<p>Invalid image format</p>
)
) : (
<p>No image available</p>
)}
</CardBody>

<Divider />

<CardFooter>
<p>{displayedData.prediction || 'Lorem ipsum dolor sit amet...'}</p>
</CardFooter>
</Card>
);
};
88 changes: 88 additions & 0 deletions components/drap-drop.tsx
@@ -0,0 +1,88 @@
'use client';

// Import necessary modules
import React, { useState } from 'react';
import { FileUploader } from 'react-drag-drop-files';
import { Button } from '@nextui-org/button';
import { Spacer, Textarea } from '@nextui-org/react';
import axios from 'axios';

const fileTypes = ['jpg', 'jpeg', 'png'];

function DragDrop({ onPrediction }) {
const [file, setFile] = useState(null);
const [description, setDescription] = useState('');

const handleChange = (file) => {
setFile(file);
};

const handleDescriptionChange = (event) => {
setDescription(event.target.value);
};

const handleButtonClick = async (event) => {
event.preventDefault();

if (!file || !description) {
return;
}

const formData = new FormData();
formData.append('image', file);
formData.append('query', description);

try {
const response = await axios.post('https://singularly-inviting-cat.ngrok-free.app/predict/', formData);

if (response.status === 200) {
const responseData = {
prediction: response.data.prediction,
time: response.data.time,
image: file, // Pass the File object directly
};

console.log('Prediction data:', responseData);
onPrediction(responseData);
} else {
console.error('Error submitting prediction data:', response.status, response.statusText);
}
} catch (error) {
console.error('Error sending POST request:', error);
}
};

return (
<form className="flex w-full items-center">
<Textarea
className="font-size-md"
fullWidth
minRows={5}
labelPlacement="outside"
placeholder="Ask Ayre a question..."
value={description}
onChange={handleDescriptionChange}
/>

<Spacer x={4} />

<div className="rounded-2xl bg-[#191717] flex flex-col items-center justify-center w-fit">
<FileUploader handleChange={handleChange} name="file" types={fileTypes} />
</div>

<Spacer x={2} />

<Button
type="submit"
color="danger"
variant="shadow"
className="w-fit"
onClick={handleButtonClick}
>
Submit
</Button>
</form>
);
}

export default DragDrop;