Posts

Showing posts from September, 2025

Describing the dataset

1. Sample

The data I used comes from the Gapminder dataset, which compiles publicly available data from organizations such as the United Nations, World Bank, and World Health Organization. The dataset includes information on over 200 countries and regions, covering social, economic, health, and environmental indicators. For this project, my sample focused on countries with available values for income per person, internet use rate, and life expectancy, resulting in approximately 180 valid observations after removing cases with missing values.

2. Data Collection Procedure

Gapminder itself does not conduct primary surveys but instead compiles and harmonizes data from authoritative international sources:

- Income per person (GDP per capita): World Bank and national accounts.
- Internet use rate (% of population): International Telecommunication Union (ITU).
- Life expectancy (years): United Nations Population Division and WHO.

These data are updated regularly and stand...
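The missing-data step described above can be sketched as follows. The small DataFrame here is a hypothetical stand-in for the real gapminder.csv extract, just to show how listwise deletion over the three study variables works:

```python
import pandas as pd

# Hypothetical stand-in rows; the real analysis loads gapminder.csv.
df = pd.DataFrame({
    "incomeperperson": [1200.0, None, 35000.0, 800.0],
    "internetuserate": [15.0, 40.0, None, 5.0],
    "lifeexpectancy":  [55.0, 70.0, 82.0, None],
})

# Keep only countries with values for all three study variables
valid = df.dropna(subset=["incomeperperson", "internetuserate", "lifeexpectancy"])
print(len(valid))  # only complete rows survive
```

On the real data this same call is what reduces the sample to the roughly 180 complete observations.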

Moderation

1. The program code

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Load dataset
df = pd.read_csv("gapminder.csv")

# Convert to numeric
df["incomeperperson"] = pd.to_numeric(df["incomeperperson"], errors="coerce")
df["internetuserate"] = pd.to_numeric(df["internetuserate"], errors="coerce")
df["lifeexpectancy"] = pd.to_numeric(df["lifeexpectancy"], errors="coerce")

# Create categorical moderator: Life Expectancy Groups
df["lifeexp_group"] = pd.cut(df["lifeexpectancy"],
                             bins=[0, 60, 75, 90],
                             labels=["Low", "Medium", "High"])

# Drop missing values
df_clean = df.dropna(subset=["incomeperperson", "internetuserate", "lifeexp_group"])

# Moderation model with interaction term
model = smf.ols("internetuserate ~ in...
```
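The excerpt above is cut off mid-formula. Below is a minimal runnable sketch of a moderation model with an income × life-expectancy-group interaction, on synthetic stand-in data; the exact formula is my assumption about where the post was heading, not a quote of it:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the cleaned Gapminder variables
rng = np.random.default_rng(0)
n = 120
income = rng.uniform(500, 50000, n)
group = pd.cut(rng.uniform(40, 85, n), bins=[0, 60, 75, 90],
               labels=["Low", "Medium", "High"])
internet = 5 + 0.001 * income + rng.normal(0, 5, n)
df = pd.DataFrame({"incomeperperson": income,
                   "lifeexp_group": group,
                   "internetuserate": internet})

# Moderation: does the income-internet slope differ across
# life-expectancy groups? The interaction terms carry that test.
model = smf.ols("internetuserate ~ incomeperperson * C(lifeexp_group)",
                data=df).fit()
print(model.params.filter(like=":"))  # the interaction coefficients
```

A significant interaction coefficient would indicate that life expectancy moderates the income-internet relationship.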

Pearson Correlation

1. The program code

```python
import pandas as pd
from scipy.stats import pearsonr

# Load dataset
df = pd.read_csv("gapminder.csv")

# Convert variables to numeric
df["incomeperperson"] = pd.to_numeric(df["incomeperperson"], errors="coerce")
df["internetuserate"] = pd.to_numeric(df["internetuserate"], errors="coerce")

# Drop missing values
df_clean = df.dropna(subset=["incomeperperson", "internetuserate"])

# Calculate Pearson correlation
r, p = pearsonr(df_clean["incomeperperson"], df_clean["internetuserate"])
print("Correlation Coefficient (r):", r)
print("p-value:", p)
print("R-squared:", r**2)
```

2. Output

3. Interpretation

The Pearson correlation between income per person and internet use rate was r = 0.75, p < .0001. This indicates a strong positive linear relationship: as countries' income per person increases, their internet use rate also tends...

Chi-square results

1. Program code

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Load dataset
df = pd.read_csv("gapminder.csv")

# Convert variables to numeric
df["incomeperperson"] = pd.to_numeric(df["incomeperperson"], errors="coerce")
df["internetuserate"] = pd.to_numeric(df["internetuserate"], errors="coerce")

# Create categorical groups
df["income_group"] = pd.cut(df["incomeperperson"],
                            bins=[0, 5000, 20000, 100000],
                            labels=["Low Income", "Middle Income", "High Income"])
df["internet_group"] = pd.cut(df["internetuserate"],
                              bins=[0, 30, 70, 100],
                              labels=["Low Internet Use", "Medium...
```
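The excerpt stops before the test itself. The usual next steps are a contingency table via pd.crosstab followed by chi2_contingency; here is a hedged sketch on a small synthetic stand-in for the binned groups:

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Synthetic stand-in rows for the two binned Gapminder variables
df = pd.DataFrame({
    "income_group": ["Low Income"] * 4 + ["Middle Income"] * 4
                    + ["High Income"] * 4,
    "internet_group": ["Low Internet Use", "Low Internet Use",
                       "Low Internet Use", "Medium Internet Use",
                       "Low Internet Use", "Medium Internet Use",
                       "Medium Internet Use", "High Internet Use",
                       "Medium Internet Use", "High Internet Use",
                       "High Internet Use", "High Internet Use"],
})

# Contingency table of observed counts
table = pd.crosstab(df["income_group"], df["internet_group"])

# Chi-square test of independence between the two groupings
chi2, p, dof, expected = chi2_contingency(table)
print("chi2:", chi2, "p:", p, "dof:", dof)
```

With three income groups and three internet-use groups, the test has (3-1) × (3-1) = 4 degrees of freedom.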

Python project 5

1. Python code

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Load dataset
df = pd.read_csv("gapminder.csv")

# Convert variables to numeric
df["incomeperperson"] = pd.to_numeric(df["incomeperperson"], errors="coerce")
df["internetuserate"] = pd.to_numeric(df["internetuserate"], errors="coerce")

# Create categorical income groups
df["income_group"] = pd.cut(df["incomeperperson"],
                            bins=[0, 5000, 20000, 100000],
                            labels=["Low Income", "Middle Income", "High Income"])

# Drop missing values
df_clean = df.dropna(subset=["income_group", "internetuserate"])

# --- Run ANOVA ---
model = ols("internetuserate ~ C(income_group)", data=df_clean).fit()
anova_table = sm.stats.anova_lm(model, typ=2)
...
```
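The excerpt ends at the ANOVA table even though it imports pairwise_tukeyhsd, so the post-hoc step is presumably what follows. A sketch of that step on synthetic stand-in groups (group means and sizes here are invented for illustration):

```python
import numpy as np
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Synthetic stand-in: 30 countries per income group with clearly
# separated internet-use means, so every pairwise contrast differs
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "income_group": np.repeat(
        ["Low Income", "Middle Income", "High Income"], 30),
    "internetuserate": np.concatenate([
        rng.normal(15, 5, 30),   # low-income countries
        rng.normal(45, 5, 30),   # middle-income countries
        rng.normal(80, 5, 30),   # high-income countries
    ]),
})

# Tukey HSD: which pairs of income groups differ in mean internet use?
tukey = pairwise_tukeyhsd(endog=df["internetuserate"],
                          groups=df["income_group"], alpha=0.05)
print(tukey.summary())
```

Tukey's HSD adjusts for the three pairwise comparisons, so a significant omnibus ANOVA can be followed up without inflating the Type I error rate.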

Python project 3

1. The Script

```python
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

# Load dataset
df = pd.read_csv("gapminder.csv")

# Convert variables to numeric
vars_of_interest = ["incomeperperson", "internetuserate", "lifeexpectancy"]
df[vars_of_interest] = df[vars_of_interest].apply(pd.to_numeric, errors="coerce")

# Data management: group variables
df["income_group"] = pd.cut(df["incomeperperson"],
                            bins=[0, 5000, 20000, 100000],
                            labels=["Low Income", "Middle Income", "High Income"])
df["internet_group"] = pd.cut(df["internetuserate"],
                              bins=[0, 30, 70, 100],
                              labels=[...
```
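The plotting code itself is cut off in this excerpt. One plausible view of these binned variables is a boxplot of internet use by income group; the sketch below uses synthetic stand-in data, and the plot type and output filename are my assumptions, not the post's:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for scripted runs
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns

# Synthetic stand-in for the cleaned, binned Gapminder columns
rng = np.random.default_rng(2)
income = rng.uniform(500, 60000, 150)
df = pd.DataFrame({
    "income_group": pd.cut(income, bins=[0, 5000, 20000, 100000],
                           labels=["Low Income", "Middle Income",
                                   "High Income"]),
    "internetuserate": np.clip(income / 700 + rng.normal(0, 8, 150), 0, 100),
})

# Distribution of internet use within each income group
sns.boxplot(x="income_group", y="internetuserate", data=df)
plt.xlabel("Income group")
plt.ylabel("Internet use rate (%)")
plt.savefig("internet_by_income.png")
```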

Python project 2

1. The Script

```python
import pandas as pd

# Load dataset
df = pd.read_csv("gapminder.csv")

# Select variables
vars_of_interest = ["incomeperperson", "internetuserate", "lifeexpectancy"]

# Convert to numeric, handle missing/invalid
df[vars_of_interest] = df[vars_of_interest].apply(pd.to_numeric, errors="coerce")

# --- Data Management Decisions ---
# 1. Drop rows where all three variables are missing
df = df.dropna(subset=vars_of_interest, how="all")

# 2. Create categorical bins
df["income_group"] = pd.cut(df["incomeperperson"],
                            bins=[0, 5000, 20000, 100000],
                            labels=["Low Income", "Middle Income", "High Income"])
df["internet_group"] = pd.cut(df["internetuserate"],
                              bins=[0, 30, 70, 100],  ...
```
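The how="all" decision in the script is worth illustrating: unlike the default, it drops a row only when every listed variable is missing. A hypothetical three-row example:

```python
import numpy as np
import pandas as pd

# Hypothetical rows: complete, entirely missing, and partially missing
df = pd.DataFrame({
    "incomeperperson": [1000.0, np.nan, np.nan],
    "internetuserate": [20.0, np.nan, 55.0],
    "lifeexpectancy":  [60.0, np.nan, np.nan],
})

# how="all": only the row missing all three variables is removed;
# the partially missing row is kept for analyses that can use it
kept = df.dropna(subset=["incomeperperson", "internetuserate",
                         "lifeexpectancy"], how="all")
print(len(kept))
```

This keeps partially observed countries in the dataset, leaving each later analysis to drop only the rows missing its own variables.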

Python project

1. The Script

```python
import pandas as pd

# Load the dataset
df = pd.read_csv("gapminder.csv")

# Select the variables of interest
vars_of_interest = ["incomeperperson", "internetuserate", "lifeexpectancy"]

# Convert columns to numeric (errors="coerce" turns bad data into NaN)
df[vars_of_interest] = df[vars_of_interest].apply(pd.to_numeric, errors="coerce")

# Run frequency distributions (value counts with bins)
freq_income = pd.cut(df["incomeperperson"], bins=5).value_counts().sort_index()
freq_internet = pd.cut(df["internetuserate"], bins=5).value_counts().sort_index()
freq_lifeexp = pd.cut(df["lifeexpectancy"], bins=5).value_counts().sort_index()

# Display results
print("Frequency Distribution: Income per Person")
print(freq_income)
print("\nFrequency Distribution: Internet Use Rate")
print(freq_internet)
print("\nFrequency Distribution: Life Expectancy")
print(freq_lifeexp)
```

2...
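What pd.cut(...).value_counts() produces can be shown on a small hypothetical series standing in for one Gapminder column: pd.cut splits the observed range into five equal-width intervals, and value_counts tallies how many values fall in each:

```python
import pandas as pd

# Hypothetical life-expectancy values for a handful of countries
life = pd.Series([48.0, 55.0, 62.0, 70.0, 74.0, 79.0, 81.0, 83.0])

# Five equal-width bins over the observed range, counted and
# ordered by interval rather than by frequency
freq = pd.cut(life, bins=5).value_counts().sort_index()
print(freq)
```

Note that bins=5 splits the range into equal-width intervals, so skewed variables like income concentrate most countries in the lowest bins; that is why the later projects switch to hand-chosen cut points.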