diff --git a/02_activities/assignments/DC_Cohort/.Rhistory b/02_activities/assignments/DC_Cohort/.Rhistory new file mode 100644 index 000000000..e69de29bb diff --git a/02_activities/assignments/DC_Cohort/Assignment1.md b/02_activities/assignments/DC_Cohort/Assignment1.md index f78778f5b..5db1ee553 100644 --- a/02_activities/assignments/DC_Cohort/Assignment1.md +++ b/02_activities/assignments/DC_Cohort/Assignment1.md @@ -205,5 +205,12 @@ Consider, for example, concepts of fairness, inequality, social structures, marg ``` -Your thoughts... + +In our daily lives we interact with databases and data systems that may seem neutral, but embed value systems shaped by the broader social structures in which they are created and deployed. When a system records only “standard” categories (e.g., male/female gender, certain ethnic groups, binary race options) or uses algorithms that assume typical behaviour as the baseline, it privileges dominant populations and implicitly marginalizes those who don’t fit. Technology then becomes a mechanism through which existing social inequalities or power structures are reproduced. +The article reminded me that archival and data systems are not passive storehouses of information but rather reflect decisions about what is collected, how it is structured, what is discarded or rendered invisible, and what relationships are acknowledged between data points. The act of creating and using a data system is a social act of selecting reality. Those selections echo values such as fairness (or its absence), equality (or neglect of difference), assumptions about normalcy, and the intersection of technology with society’s hierarchies and exclusions. +For instance, data systems may privilege quantifiable, easily measurable traits over more nuanced social realities (such as marginalization, intersectionality, lived experience).
People who fall into multiple disadvantaged categories (race + gender + disability) may not be adequately represented if the system is designed only with “single-axis” categories. That omission encodes structural bias into the system. Further, when such systems are used in decision-making (for funding, for access, for monitoring), the embedded values have real impacts such as reinforcing marginalization, excluding certain groups from benefits, or misrepresenting experiences of inequality. +At the same time, technology and society co-construct one another. Data infrastructures aren’t merely technical; they carry the imprint of their designers’ assumptions, institutional logics, and historical contexts. The design of a database, including which fields are included, how privacy is handled, and who has access, reflects social structures (who holds power, whose voices matter). And as society changes, these systems either adapt or become rigid. For example, if a system was built with a narrow conception of “user” based on a dominant group, it might fail to serve people from marginalized communities, thereby reproducing structural inequities in digital form. +In my own routines (as a student, teaching assistant, researcher) I interact with university information systems, learning-management platforms, research databases, etc. These environments embody value systems: ranking, assessing, benchmarking, tracking performance. They often focus on the measurable (grades, attendance, submission times) rather than the qualitative, relational, or structural factors that affect success (such as access, mentorship, socio-economic inequities, systemic bias). Therefore, they reflect wider societal structures of competition, credentialism, and standardization, which may advantage those already positioned favourably and disadvantage those without resources or who don’t conform to the “norm”. +Recognizing this helps me reflect on how I might engage with data systems more critically.
It invites awareness that technology isn’t neutral, but an active layer in the construction of social realities. Accordingly, when we design, use, or critique data systems, we must attend not only to technical correctness, but to the values embedded within. + ``` diff --git a/02_activities/assignments/DC_Cohort/Assignment2.md b/02_activities/assignments/DC_Cohort/Assignment2.md index 9b804e9ee..667fa31a1 100644 --- a/02_activities/assignments/DC_Cohort/Assignment2.md +++ b/02_activities/assignments/DC_Cohort/Assignment2.md @@ -54,7 +54,7 @@ The store wants to keep customer addresses. Propose two architectures for the CU **HINT:** search type 1 vs type 2 slowly changing dimensions. ``` -Your answer... +A Type 1 slowly changing dimension overwrites the old address in place (no history is kept), while a Type 2 dimension retains every change by adding a new row, typically with effective dates or a current-row flag. ``` *** @@ -183,5 +183,13 @@ Consider, for example, concepts of labour, bias, LLM proliferation, moderating c ``` -Your thoughts... + +Reading the article reveals how deeply human judgment is woven into systems that are often framed as purely computational. What appears to be an automated network of mathematical layers is actually built on top of human labour, subjective decisions, and inherited social structures. The ethical implications become visible once the full chain of human involvement is acknowledged. +The article emphasizes the labour-intensive collection and labeling of training data. These choices unfold within specific social contexts. When the data depends on contractors working across different countries and conditions, guided by shifting instructions and inconsistent interpretations, the model ends up reflecting these uneven and sometimes contradictory judgments. The ethical challenge arises from the fact that these influences remain invisible in the final outputs, even though they shape them profoundly.
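The Type 1 vs. Type 2 answer above can be sketched in SQL against a hypothetical `customer_address` table; the table, columns, and versioning fields (`valid_from`, `valid_to`, `is_current`) are illustrative assumptions, not part of the assignment schema:

```sql
-- Type 1: overwrite the address in place; the previous value is lost.
UPDATE customer_address
SET street = '123 New St'
WHERE customer_id = 42;

-- Type 2: retain history; close out the current row, then insert a new one.
UPDATE customer_address
SET valid_to = DATE('now'), is_current = 0
WHERE customer_id = 42 AND is_current = 1;

INSERT INTO customer_address (customer_id, street, valid_from, valid_to, is_current)
VALUES (42, '123 New St', DATE('now'), NULL, 1);
```

Type 1 is simpler and keeps the table small but destroys history; Type 2 preserves every past address at the cost of extra rows and a required `is_current = 1` (or `valid_to IS NULL`) filter when querying the current state.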
+The article also illustrates how routine engineering practices (cleaning datasets, tuning model architectures, deciding which edge cases to keep or discard) are not neutral technical steps. For example, choosing which misclassifications matter enough to fix or which accents to normalize embeds priorities and worldview directly into the system. What the engineer dismisses as “noise” may represent someone’s dialect, culture, or identity. These small, everyday decisions accumulate into large-scale patterns once deployed, where biases that began as minor human choices become institutionalized by the scale and authority of machine learning. +Another ethical concern in the article is how quickly experimental models become infrastructural. When neural networks are integrated into tools that determine what content is surfaced, what speech is filtered, or how individuals are categorized, they begin shaping societal norms rather than simply reflecting them. A system trained on data filtered by particular people ends up implicitly defining what types of communication are acceptable or which expressions are recognizable. Those whose patterns were underrepresented or misinterpreted in the training process are more likely to be misclassified or silenced, reinforcing existing forms of marginalization. +The article also highlights the stark divide between those who build these systems and those who must live within them. Engineers can refine models, debate architectures, or discard entire approaches without consequence, while the model’s outputs influence people who had no participation in the system’s creation. This asymmetry raises ethical questions about consent, accountability, and control. Decisions that seem trivial within a development pipeline may carry significant consequences for individuals evaluated by the final system.
+Ultimately, the article makes clear that the ethical issues surrounding neural networks are not speculative; they stem directly from the human choices, inconsistencies, and social dynamics embedded in every stage of development. Recognizing that these systems are built on imperfect human scaffolding demands a more honest approach to their design: one that confronts the biases, power imbalances, and societal impacts inherent in the work, and aims to build technology that better accounts for the full diversity of those affected by it. + + ``` diff --git a/02_activities/assignments/DC_Cohort/DSI_SQL_LOGICAL DATA MODEL.pdf b/02_activities/assignments/DC_Cohort/DSI_SQL_LOGICAL DATA MODEL.pdf new file mode 100644 index 000000000..367cbffd4 Binary files /dev/null and b/02_activities/assignments/DC_Cohort/DSI_SQL_LOGICAL DATA MODEL.pdf differ diff --git a/02_activities/assignments/DC_Cohort/SQL_Assign2_Logic Model.pdf b/02_activities/assignments/DC_Cohort/SQL_Assign2_Logic Model.pdf new file mode 100644 index 000000000..f0a78fff8 Binary files /dev/null and b/02_activities/assignments/DC_Cohort/SQL_Assign2_Logic Model.pdf differ diff --git a/02_activities/assignments/DC_Cohort/assignment1.sql b/02_activities/assignments/DC_Cohort/assignment1.sql index c992e3205..9db669cde 100644 --- a/02_activities/assignments/DC_Cohort/assignment1.sql +++ b/02_activities/assignments/DC_Cohort/assignment1.sql @@ -1,20 +1,29 @@ /* ASSIGNMENT 1 */ /* SECTION 2 */ +/* Ayesha Rashidi */ --SELECT /* 1. Write a query that returns everything in the customer table. */ - +SELECT * +FROM customer; /* 2. Write a query that displays all of the columns and 10 rows from the customer table, sorted by customer_last_name, then customer_first_name. */ +SELECT * +FROM customer +ORDER BY customer_last_name, customer_first_name +LIMIT 10; --WHERE /* 1. Write a query that returns all customer purchases of product IDs 4 and 9. */ +SELECT * +FROM customer_purchases +WHERE product_id IN (4, 9); /*2.
Write a query that returns all customer purchases and a new calculated column 'price' (quantity * cost_to_customer_per_qty), @@ -24,10 +33,17 @@ filtered by customer IDs between 8 and 10 (inclusive) using either: */ -- option 1 +SELECT *, + (quantity * cost_to_customer_per_qty) AS price +FROM customer_purchases +WHERE customer_id >= 8 AND customer_id <= 10; -- option 2 - +SELECT *, + (quantity * cost_to_customer_per_qty) AS price +FROM customer_purchases +WHERE customer_id BETWEEN 8 AND 10; --CASE /* 1. Products can be sold by the individual unit or by bulk measures like lbs. or oz. @@ -35,20 +51,42 @@ Using the product table, write a query that outputs the product_id and product_n columns and add a column called prod_qty_type_condensed that displays the word “unit” if the product_qty_type is “unit,” and otherwise displays the word “bulk.” */ +SELECT product_id, + product_name, + CASE + WHEN product_qty_type = 'unit' + THEN 'unit' + ELSE 'bulk' + END AS prod_qty_type_condensed +FROM product; /* 2. We want to flag all of the different types of pepper products that are sold at the market. add a column to the previous query called pepper_flag that outputs a 1 if the product_name contains the word “pepper” (regardless of capitalization), and otherwise outputs 0. */ - +SELECT product_id, + product_name, + CASE + WHEN product_qty_type = 'unit' + THEN 'unit' + ELSE 'bulk' + END AS prod_qty_type_condensed, + CASE + WHEN LOWER(product_name) LIKE '%pepper%' THEN 1 + ELSE 0 + END AS pepper_flag +FROM product; --JOIN /* 1. Write a query that INNER JOINs the vendor table to the vendor_booth_assignments table on the vendor_id field they both have in common, and sorts the result by vendor_name, then market_date. */ - - +SELECT * +FROM vendor v +INNER JOIN vendor_booth_assignments vba + ON v.vendor_id = vba.vendor_id +ORDER BY v.vendor_name, vba.market_date; /* SECTION 3 */ @@ -64,7 +102,16 @@ of customers for them to give stickers to, sorted by last name, then first name. 
HINT: This query requires you to join two tables, use an aggregate function, and use the HAVING keyword. */ - +SELECT c.customer_id, + c.customer_first_name, + c.customer_last_name, + SUM(cp.quantity*cp.cost_to_customer_per_qty) AS total_spent +FROM customer_purchases AS cp +LEFT JOIN customer AS c -- LEFT JOIN keeps purchases even if a customer_id in customer_purchases has no matching row in customer. +ON cp.customer_id = c.customer_id +GROUP BY cp.customer_id, c.customer_first_name, c.customer_last_name +HAVING total_spent > 2000 +ORDER BY c.customer_last_name, c.customer_first_name; --Temp Table /* 1. Insert the original vendor table into a temp.new_vendor and then add a 10th vendor: @@ -78,6 +125,26 @@ When inserting the new vendor, you need to appropriately align the columns to be VALUES(col1,col2,col3,col4,col5) */ +DROP TABLE IF EXISTS temp.new_vendor; -- If it previously existed, delete it. +CREATE TABLE temp.new_vendor AS +SELECT * +FROM vendor; +INSERT INTO temp.new_vendor ( + vendor_id, + vendor_name, + vendor_type, + vendor_owner_first_name, + vendor_owner_last_name +) +VALUES ( + 10, + 'Thomas Superfood Store', + 'Fresh Focused', + 'Thomas', + 'Rosenthal' +); +SELECT * +FROM temp.new_vendor; -- Date
*/ - +SELECT product_name || ', ' || COALESCE(product_size,'') || ' (' || COALESCE(product_qty_type,'unit') || ')' AS product_list +FROM product; --Windowed Functions /* 1. Write a query that selects from the customer_purchases table and numbers each customer’s @@ -32,18 +34,29 @@ each new market date for each customer, or select only the unique market dates p (without purchase details) and number those visits. HINT: One of these approaches uses ROW_NUMBER() and one uses DENSE_RANK(). */ - +SELECT *, +DENSE_RANK() OVER(PARTITION BY customer_id ORDER BY market_date ASC) as visit_number +FROM customer_purchases; /* 2. Reverse the numbering of the query from a part so each customer’s most recent visit is labeled 1, then write another query that uses this one as a subquery (or temp table) and filters the results to only the customer’s most recent visit. */ - +SELECT * +FROM + ( + SELECT *, + DENSE_RANK() OVER(PARTITION BY customer_id ORDER BY market_date DESC) as visit_number + FROM customer_purchases + ) AS sub +WHERE visit_number = 1; /* 3. Using a COUNT() window function, include a value along with each row of the customer_purchases table that indicates how many different times that customer has purchased that product_id. */ - +SELECT *, +COUNT(*) OVER(PARTITION BY customer_id, product_id) as times_purchased -- No ORDER BY in the window, so this is the total count per customer/product rather than a running count. +FROM customer_purchases; -- String manipulations /* 1. Some product names in the product table have descriptions like "Jar" or "Organic". @@ -57,11 +70,24 @@ Remove any trailing or leading whitespaces. Don't just use a case statement for Hint: you might need to use INSTR(product_name,'-') to find the hyphens. INSTR will help split the column. */ - +SELECT product_name, +CASE + WHEN INSTR(product_name,'-') != 0 + THEN LTRIM(RTRIM(SUBSTR(product_name,INSTR(product_name,'-')+2))) + ELSE NULL +END as description +FROM product; /* 2. Filter the query to show any product_size value that contains a number with REGEXP.
*/ - +SELECT product_name, +CASE + WHEN INSTR(product_name,'-') != 0 + THEN LTRIM(RTRIM(SUBSTR(product_name,INSTR(product_name,'-')+2))) + ELSE NULL +END as description +FROM product +WHERE product_size REGEXP '[0-9]'; -- UNION /* 1. Using a UNION, write a query that displays the market dates with the highest and lowest total sales. @@ -73,8 +99,23 @@ HINT: There are possibly a few ways to do this query, but if you're struggling 3) Query the second temp table twice, once for the best day, once for the worst day, with a UNION binding them. */ - - +WITH sales_per_date AS ( + SELECT market_date, SUM(quantity*cost_to_customer_per_qty) as sales + FROM customer_purchases + GROUP BY market_date +), ranked_sales AS ( + SELECT *, + RANK() OVER(ORDER BY sales DESC) as best_rank, + RANK() OVER(ORDER BY sales ASC) as worst_rank + FROM sales_per_date +) +SELECT market_date, sales, 'Best Day' AS day_type +FROM ranked_sales +WHERE best_rank = 1 +UNION +SELECT market_date, sales, 'Worst Day' AS day_type +FROM ranked_sales +WHERE worst_rank = 1; /* SECTION 3 */ @@ -89,7 +130,26 @@ Think a bit about the row counts: how many distinct vendors, product names are t How many customers are there (y). Before your final group by you should have the product of those two queries (x*y). */ - +WITH named_table AS ( + SELECT * + FROM vendor_inventory as vi + LEFT JOIN vendor as v + ON vi.vendor_id = v.vendor_id + LEFT JOIN product as p + ON vi.product_id = p.product_id +), distinct_table AS ( + SELECT vendor_name,product_name, + AVG(original_price) as avg_price -- The price should be the same for every row, but AVG() covers the case where it differs. + FROM named_table + GROUP BY vendor_name,product_name +), customer_table AS ( + SELECT * + FROM distinct_table + CROSS JOIN customer +) +SELECT vendor_name,product_name,SUM(avg_price*5) as money_made +FROM customer_table +GROUP BY vendor_name,product_name; -- INSERT /*1. Create a new table "product_units".
@@ -97,19 +157,36 @@ This table will contain only products where the `product_qty_type = 'unit'`. It should use all of the columns from the product table, as well as a new column for the `CURRENT_TIMESTAMP`. Name the timestamp column `snapshot_timestamp`. */ - +DROP TABLE IF EXISTS temp.product_units; +CREATE TEMP TABLE temp.product_units AS + SELECT *, CURRENT_TIMESTAMP as snapshot_timestamp + FROM product + WHERE product_qty_type = 'unit'; +SELECT * +FROM temp.product_units; /*2. Using `INSERT`, add a new row to the product_units table (with an updated timestamp). This can be any product you desire (e.g. add another record for Apple Pie). */ - +INSERT INTO temp.product_units +VALUES(7,'Apple Pie','10"',3,'unit',CURRENT_TIMESTAMP); +SELECT * +FROM temp.product_units; -- DELETE /* 1. Delete the older record for the whatever product you added. HINT: If you don't specify a WHERE clause, you are going to have a bad time.*/ - +DELETE FROM temp.product_units +WHERE product_name = 'Apple Pie' +AND snapshot_timestamp = ( + SELECT MIN(snapshot_timestamp) + FROM temp.product_units + WHERE product_name = 'Apple Pie' +); +SELECT * +FROM temp.product_units; -- UPDATE /* 1.We want to add the current_quantity to the product_units table. @@ -128,6 +205,29 @@ Finally, make sure you have a WHERE statement to update the right row, you'll need to use product_units.product_id to refer to the correct row within the product_units table. When you have all of these components, you can run the update statement. 
*/ +ALTER TABLE temp.product_units +ADD current_quantity INT; +WITH last_quantity AS ( + SELECT product_id, quantity, MAX(market_date) as last_date -- SQLite's bare-column behaviour returns the quantity from the row with the latest market_date. + FROM vendor_inventory + GROUP BY product_id +), pu_lq AS ( + SELECT pu.product_id, lq.quantity -- Select explicit columns so the join doesn't produce duplicate, ambiguous product_id columns. + FROM temp.product_units as pu + LEFT JOIN last_quantity as lq + ON pu.product_id = lq.product_id +), pu_lq_fix AS ( + SELECT product_id, COALESCE(quantity,0) as latest_quantity -- Products never stocked default to 0. + FROM pu_lq +) +UPDATE temp.product_units +SET current_quantity = ( + SELECT latest_quantity + FROM pu_lq_fix + WHERE pu_lq_fix.product_id = temp.product_units.product_id +); +SELECT * +FROM temp.product_units;