Some type of privacy-preserving transformation must be applied to any data record from the Industrial Internet of Things (IIoT) before it is disclosed to researchers or analysts. Building on existing privacy models such as Differential Privacy (DP) and k-anonymity, we extend the DP model to explicitly incorporate feature dependencies and to produce privacy guarantees in a probabilistic form that generalizes k-anonymity. We assume that additional (external) knowledge of these dependencies can be represented in the form of joint probability distributions, quantified by measures such as Mutual Information (MI). We propose an enhanced definition of DP together with a realisation for non-randomizing anonymization strategies such as binning, reducing the extent of binning required and thereby preserving more valuable information for researchers. This allows privacy conditions to be formulated over an evolving set of features, such that each feature can be associated with its own privacy-budget allowance. As a case study, we consider an example from the Industrial Medical Internet of Things (IMIoT), where we have identified challenges that are not completely addressed by existing privacy models. Unlike physiological measurements in conventional medical environments, IMIoT is likely to produce duplicate and overlapping measurements, which can be associated with different personally identifiable items of information. As an example, we present a model of sequential feature collection.
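The interplay between binning and feature dependencies sketched in the abstract can be illustrated with a minimal example: empirical Mutual Information measures how strongly a quasi-identifying feature depends on a sensitive one, and coarser binning of the quasi-identifier reduces that dependency. The helper names and the bin width below are illustrative assumptions for this sketch, not the chapter's actual implementation.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information (in bits) between two discrete sequences."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * math.log2(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

def bin_values(values, width):
    """Generalize numeric values into fixed-width bins
    (a non-randomizing anonymization step, as in binning)."""
    return [int(v // width) * width for v in values]

# Illustrative data: a numeric quasi-identifier (e.g. an age-like reading)
# paired with a distinct record identifier per row.
readings = [23, 27, 34, 38]
record_ids = [0, 1, 2, 3]

raw_mi = mutual_information(readings, record_ids)          # 2.0 bits: readings fully identify records
binned = bin_values(readings, width=10)                    # [20, 20, 30, 30]
binned_mi = mutual_information(binned, record_ids)         # 1.0 bit: binning halves the leakage
```

Under a per-feature privacy budget, a feature whose MI with identifying information already falls below its allowance would need less aggressive binning, which is the information-preservation effect the abstract refers to.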
Title of host publication: Security and Privacy Trends in the Industrial Internet of Things
Number of pages: 21
Publication status: E-pub ahead of print - 14 May 2019