This project presents a Python implementation of the gradient descent algorithm for multi-linear regression. Designed to handle problems with 'r' predictors, it allows customization of the learning rate (η) and the number of iteration steps. The implementation is tested on two datasets: advertising.csv and auto.csv, using a suitable train-test split to evaluate the model's performance.
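As a rough illustration of the technique (a minimal sketch, not the project's actual code), batch gradient descent on the mean-squared-error cost for a multi-linear model can be written as:

```python
import numpy as np

def gradient_descent(X, y, eta=0.01, n_iters=1000):
    """Minimal batch gradient descent for y ≈ [1, X] @ beta under the MSE cost."""
    n = X.shape[0]
    Xb = np.c_[np.ones(n), X]              # prepend a column of ones for the intercept
    beta = np.zeros(Xb.shape[1])           # one coefficient per predictor, plus the intercept
    for _ in range(n_iters):
        residuals = Xb @ beta - y          # prediction errors, shape (n,)
        grad = (2.0 / n) * (Xb.T @ residuals)   # gradient of the MSE cost w.r.t. beta
        beta -= eta * grad                 # step against the gradient
    return beta
```

The learning rate `eta` and the iteration count `n_iters` correspond to the two tunable parameters mentioned above.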
- Implements gradient descent for multi-linear regression problems.
- Customizable learning rate and iteration steps.
- Evaluation using cost function and R-squared test.
- Tested on real-world datasets (`advertising.csv` and `auto.csv`).
- Visualization tools for analyzing regression results.
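The evaluation metrics listed above are standard; as a hedged sketch (illustrative helper functions, not the project's code), the cost function and the R-squared test could be computed as:

```python
import numpy as np

def mse_cost(y_true, y_pred):
    """Mean squared error: the cost minimized by gradient descent."""
    return np.mean((y_true - y_pred) ** 2)

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot
```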
- Python environment
- Libraries: NumPy, Matplotlib, Seaborn (optional), Scikit-learn.
Clone the repository to your local machine:

`git clone [repository-url]`

- Navigate to the project directory.
- Open the provided Jupyter notebooks (`main_advertising.ipynb` and `main_auto.ipynb`) to see the implementation on the respective datasets.
- Modify the parameters (learning rate, iterations, test size) in the `Model` class instantiation as needed (see the example below).
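The `Model` class itself is defined in the notebooks, so the parameter names below are assumptions rather than the project's documented signature; an instantiation with custom settings might look roughly like this:

```python
# Hypothetical parameter names -- consult the Model class in the notebooks
# for the project's actual signature before running this.
model = Model(learning_rate=0.01, iterations=1000, test_size=0.2)
```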
Two Jupyter notebooks are provided:
- `advertising_analysis.ipynb` for the `advertising.csv` dataset.
- `auto_analysis.ipynb` for the `auto.csv` dataset.
These notebooks guide you through the process of loading the data, creating an instance of the `Model` class, running the regression analysis, and visualizing the results.
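For a feel of that flow outside the notebooks, here is a rough standalone sketch that reuses the illustrative `gradient_descent` and `r_squared` helpers from the earlier snippets; the target column name, hyperparameter values, and plotting step are assumptions, and the actual notebooks go through the project's `Model` class instead.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split

# Load one of the project's datasets; "Sales" as the target column is an assumption.
df = pd.read_csv("advertising.csv")
X = df.drop(columns=["Sales"]).to_numpy(dtype=float)
y = df["Sales"].to_numpy(dtype=float)

# Standardize the predictors so a single learning rate works for every column.
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Hold out a test set, fit by gradient descent, and evaluate with R^2.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
beta = gradient_descent(X_train, y_train, eta=0.01, n_iters=5000)   # helper from the earlier sketch
y_pred = np.c_[np.ones(len(X_test)), X_test] @ beta
print("Test R^2:", r_squared(y_test, y_pred))

# Quick visual check: predicted vs. actual values on the held-out set.
plt.scatter(y_test, y_pred)
plt.xlabel("Actual")
plt.ylabel("Predicted")
plt.show()
```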