# Ikizamini

Welcome to Ikizamini, a comprehensive tool to help you practice and prepare for the driving test theory exam.

## Table of Contents

- [Features](#features)
- [Technologies Used](#technologies-used)
- [Contributing](#contributing)
- [Acknowledgements](#acknowledgements)
- [Generating Questions](#generating-questions)

## Features

- Interactive quiz to test your knowledge of driving rules, regulations, and safety.
- Real-time feedback on your answers.
- Timer to simulate the actual exam conditions.
- Detailed results page showing correct and incorrect answers.

## Technologies Used

- **React**: A JavaScript library for building user interfaces.
- **Vite**: A build tool that provides a fast development environment.
- **TypeScript**: A typed superset of JavaScript that compiles to plain JavaScript.
- **Firebase Hosting**: Fast and secure hosting for web applications.
- **Tailwind CSS**: A utility-first CSS framework for rapid UI development.
- **React Router**: Declarative routing for React applications.
- **Redux Toolkit**: The official, recommended way to write Redux logic.

## Contributing

Contributions are welcome! Please follow these steps to contribute:

1. Fork the repository.
2. Create a new branch (`git checkout -b feature-branch`).
3. Make your changes.
4. Commit your changes (`git commit -m 'Add new feature'`).
5. Push to the branch (`git push origin feature-branch`).
6. Open a Pull Request.


## Acknowledgements

- Developed by [Marius Ngaboyamahina](https://www.linkedin.com/in/ntezi/).

## Generating Questions

This application uses questions generated by crawling the Rwanda Traffic Guide website. Here is how the questions were generated:

### Web Crawler for Rwanda Traffic Guide Questions

This script crawls pages from the Rwanda Traffic Guide website, extracts questions, options, correct answers, and associated images, then saves this data in a JSON file.

### Requirements

Ensure you have the following libraries installed:

- `requests`
- `beautifulsoup4`
- `lxml`
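
You can install them with `pip install requests beautifulsoup4 lxml`. The snippets below also assume these imports at the top of the script, which cover everything the code uses:

```python
import os
import re
import json

import requests
from bs4 import BeautifulSoup
```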

### Script Overview

#### Variables and Setup

1. **Base URL and Starting URL**:
- The base URL and the starting URL for the crawling process are defined.
```python
base_url = "https://rwandatrafficguide.com/"
start_url = "https://rwandatrafficguide.com/rtg001-ikinyabiziga-cyose-cyangwa-ibinyabiziga-bigomba-kugira/"
```

2. **Directory to Save Images**:
- Creates a directory to save downloaded images.
```python
os.makedirs("downloaded_images", exist_ok=True)
```

3. **Custom Headers**:
- Custom headers are defined to mimic a browser request and avoid being blocked by the website.
```python
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36"
}
```

4. **Data Storage**:
- An empty list is created to store the extracted data.
```python
data = []
```

#### Functions

1. **Extract ID from URL**:
- Extracts the numerical ID from the URL using a regular expression (a short usage example follows this list).
```python
def extract_id_from_url(url):
    match = re.search(r'rtg(\d+)', url)
    return int(match.group(1)) if match else None
```

2. **Fetch Page**:
- Fetches the HTML content of the page.
```python
def fetch_page(url):
    response = requests.get(url, headers=headers)
    if response.status_code == 200:
        return response.text
    else:
        print(f"Failed to retrieve the page. Status code: {response.status_code}")
        return None
```

3. **Parse Page**:
- Parses the HTML content to extract the question, options, correct answer, and image.
```python
def parse_page(html, url):
    soup = BeautifulSoup(html, 'html.parser')
    entry_content = soup.find('div', class_='entry-content clr')
    if not entry_content:
        return None

    question_tag = entry_content.find('p', class_='question')
    question = question_tag.text.strip() if question_tag else ""
    options_list = entry_content.find('ul', class_='list')
    options = {}
    answer = ""

    if options_list:
        for idx, li in enumerate(options_list.find_all('li'), start=1):
            option_text = li.text.strip()
            option_key = chr(96 + idx)  # 'a', 'b', 'c', 'd'
            options[option_key] = option_text
            # The correct option is marked with <strong class="colored">
            if li.find('strong', class_='colored'):
                answer = option_key

    image_url = ""
    image_tag = entry_content.find('figure', class_='wp-block-image')
    if image_tag and image_tag.find('img'):
        img_src = image_tag.find('img')['src']
        img_name = os.path.basename(img_src)
        img_response = requests.get(img_src, headers=headers)
        with open(os.path.join("downloaded_images", img_name), 'wb') as f:
            f.write(img_response.content)
        image_url = img_name

    question_id = extract_id_from_url(url)

    data.append({
        "id": question_id,
        "question": question,
        "image": image_url,
        "options": options,
        "answer": answer
    })

    # Follow the "next page" navigation link, if one exists
    next_page_tag = soup.find('div', class_='nav-next')
    next_page_url = next_page_tag.a['href'] if next_page_tag and next_page_tag.a else None

    return next_page_url
```
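
As a quick illustration of the first helper, applying `extract_id_from_url` to the starting URL defined earlier returns `1`, since the regular expression captures the digits in the `rtg001` slug:

```python
extract_id_from_url(start_url)  # -> 1 (parsed from "rtg001" in the URL)
```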

### Main Logic

The script starts crawling from the initial URL and continues to the next page until no further pages are found.
```python
url = start_url
while url:
    html = fetch_page(url)
    if html:
        url = parse_page(html, url)
    else:
        break
```

### Save Data to JSON

After crawling, the script saves the extracted data in a JSON file.
```python
with open('questions.json', 'w', encoding='utf-8') as json_file:
    json.dump(data, json_file, ensure_ascii=False, indent=4)

print("Crawling completed and data saved to questions.json")
```

### Running the Script

1. Ensure you have the required libraries installed.
2. Save the script to a Python file, e.g., `crawl_questions.py`.
3. Run the script:
```sh
python crawl_questions.py
```

The script will create a directory named `downloaded_images` to save any images it downloads. It will also create a JSON file named `questions.json` containing the crawled data.

### Example JSON Output

```json
[
    {
        "id": 22,
        "question": "Itara ryo guhagarara ry’ibara ritukura rigomba kugaragara igihe ijuru rikeye nibura mu ntera ikurikira",
        "image": "RTGQ398-Ibibazo-Nibisubizo-Byamategeko-Yumuhanda-Rwanda-Traffic-Guide-Com-ni-ikihe-icyapa-gisobanura-umuhanda-w-icyerekezo-kimwe-icyapa-e-a.jpg",
        "options": {
            "a": "Metero 100 ku manywa na metero 20 mu ijoro",
            "b": "Metero 150 ku manywa na metero 50 mu ijoro",
            "c": "Metero 200 ku manywa na metero 100 mu ijoro",
            "d": "Nta gisubizo cy’ukuri kirimo"
        },
        "answer": "d"
    }
]
```
