Exercises: association rules, item taxonomy, and vector quantization (Chapters 5–7)
- Consider the data set shown in Table 5.20 (page 439). (Chapter 5)
(a) Compute the support for itemsets {e}, {b, d}, and {b, d, e} by treating each transaction ID as a market basket.
(b) Use the results in part (a) to compute the confidence for the association rules {b, d} −→ {e} and {e} −→ {b, d}. Is confidence a symmetric measure?
(c) Repeat part (a) by treating each customer ID as a market basket. Each item should be treated as a binary variable (1 if an item appears in at least one transaction bought by the customer, and 0 otherwise). Use this result to compute the confidence for the association rules {b, d} −→ {e} and {e} −→ {b, d}.
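The following is a minimal Python sketch of the support/confidence computations in parts (a)–(c); the baskets are placeholders, not the actual rows of Table 5.20, so substitute the real data before reading off answers. Note that confidence is not symmetric in general: conf(X −→ Y) and conf(Y −→ X) divide the same joint support by different antecedent supports.

```python
# Placeholder baskets, NOT Table 5.20; substitute the textbook rows.
def support(itemset, baskets):
    """Fraction of baskets that contain every item in `itemset`."""
    items = set(itemset)
    return sum(items <= basket for basket in baskets) / len(baskets)

def confidence(lhs, rhs, baskets):
    """conf(lhs -> rhs) = support(lhs union rhs) / support(lhs)."""
    return support(set(lhs) | set(rhs), baskets) / support(lhs, baskets)

# Parts (a)/(b): each transaction ID is one market basket.
tx_baskets = [{"a", "d", "e"}, {"b", "d", "e"}, {"b", "c", "e"}]
print(support({"b", "d", "e"}, tx_baskets))
print(confidence({"b", "d"}, {"e"}, tx_baskets))
print(confidence({"e"}, {"b", "d"}, tx_baskets))

# Part (c): one basket per customer, the union of that customer's transactions.
customer_tx = {"C1": [{"a", "d"}, {"d", "e"}], "C2": [{"b", "d", "e"}]}
cust_baskets = [set().union(*txs) for txs in customer_tx.values()]
print(confidence({"b", "d"}, {"e"}, cust_baskets))
```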
- Consider the transactions shown in Table 6.15, with an item taxonomy given in Figure 6.15 (page 515). (Chapter 6)
(a) What are the main challenges of mining association rules with item taxonomy?
(b) Consider the approach where each transaction t is replaced by an extended transaction t′ that contains all the items in t as well as their respective ancestors. For example, the transaction t = {Chips, Cookies} will be replaced by t′ = {Chips, Cookies, Snack Food, Food}. Use this approach to derive all frequent itemsets (up to size 4) with support ≥ 70%.
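As a sketch of the extension step just described, the snippet below adds every ancestor of every item to a transaction. The taxonomy dict is a hypothetical fragment pieced together from the items named in this exercise, not the full Figure 6.15 hierarchy; in particular, Soda's parent being Food is an assumption.

```python
# Hypothetical fragment of the Figure 6.15 taxonomy: item -> parent.
# Soda -> Food is an assumption; fill in the actual hierarchy.
TAXONOMY = {
    "Chips": "Snack Food",
    "Cookies": "Snack Food",
    "Snack Food": "Food",
    "Diet Soda": "Soda",
    "Soda": "Food",
    "Food": None,
}

def extend(transaction):
    """Replace t by t': t plus the ancestors of every item in t."""
    extended = set(transaction)
    frontier = list(transaction)
    while frontier:
        parent = TAXONOMY.get(frontier.pop())
        if parent is not None and parent not in extended:
            extended.add(parent)
            frontier.append(parent)
    return extended

print(sorted(extend({"Chips", "Cookies"})))
# ['Chips', 'Cookies', 'Food', 'Snack Food']
```

Running a standard frequent-itemset miner (e.g., Apriori) over the extended transactions then yields the itemsets requested in part (b).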
(c) Consider an alternative approach where the frequent itemsets are generated one level at a time. Initially, all the frequent itemsets involving items at the highest level of the hierarchy are generated. Next, we use the frequent itemsets discovered at the higher level of the hierarchy to generate candidate itemsets involving items at the lower levels of the hierarchy. For example, we generate the candidate itemset {Chips, Diet Soda} only if {Snack Food, Soda} is frequent. Use this approach to derive all frequent itemsets (up to size 4) with support ≥ 70%.
- Consider a data set consisting of 2^20 data vectors, where each vector has 32 components and each component is a 4-byte value. Suppose that vector quantization is used for compression and that 2^16 prototype vectors are used. How many bytes of storage does the data set take before and after compression, and what is the compression ratio? (Chapter 7)
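Below is a worked sketch of the storage arithmetic for the Chapter 7 question, assuming each vector is replaced by a 16-bit index into the 2^16-entry codebook (log2(2^16) = 16 bits) and that the codebook itself is counted toward the compressed size, which is a modeling choice.

```python
n_vectors = 2**20   # data vectors
dims      = 32      # components per vector
nbytes    = 4       # bytes per component
n_protos  = 2**16   # prototype (codebook) vectors

before   = n_vectors * dims * nbytes   # 2^27 = 134,217,728 bytes
indices  = n_vectors * 16 // 8         # one 16-bit index per vector: 2^21 bytes
codebook = n_protos * dims * nbytes    # 2^23 = 8,388,608 bytes
after    = indices + codebook          # 10,485,760 bytes

print(before, after, before / after)   # 134217728 10485760 12.8
```

Excluding the codebook from the compressed size would instead give a ratio of 2^27 / 2^21 = 64.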