---
title: Handbook of Rough Set Extensions and Uncertainty Models
tags: 
author: [Testing](https://docswell.com/user/3200568)
site: [Docswell](https://www.docswell.com/)
thumbnail: https://bcdn.docswell.com/page/D7Y4MZWGEM.jpg?width=480
description: Handbook of Rough Set Extensions and Uncertainty Models
published: April 09, 2026
canonical: https://docswell.com/s/3200568/K4NV91-2026-04-09-005559
---
# Page. 1

![Page Image](https://bcdn.docswell.com/page/D7Y4MZWGEM.jpg)



# Page. 2

![Page Image](https://bcdn.docswell.com/page/VENYW398J8.jpg)

Takaaki Fujita, Florentin Smarandache
Handbook of Rough Set Extensions
and Uncertainty Models
Neutrosophic Science International Association (NSIA)
Publishing House
Gallup - Guayaquil
United States of America – Ecuador
2026


# Page. 3

![Page Image](https://bcdn.docswell.com/page/Y79PX92XE3.jpg)

Editor:
Neutrosophic Science International Association (NSIA)
Publishing House
https://fs.unm.edu/NSIA/
Division of Mathematics and Sciences
University of New Mexico
705 Gurley Ave., Gallup Campus
NM 87301, United States of America
University of Guayaquil
Av. Kennedy and Av. Delta
“Dr. Salvador Allende” University Campus
Guayaquil 090514, Ecuador
Peer-Reviewers:
Maikel Leyva-Vázquez
Universidad de Guayaquil, Guayas, ECUADOR
maikel.leyvav@ug.edu.ec
Victor Christianto
Malang Institute of Agriculture (IPM), Malang, INDONESIA
victorchristianto@gmail.com


# Page. 4

![Page Image](https://bcdn.docswell.com/page/G78D29597D.jpg)

Contents in this book
The remainder of this book is organized as follows.
1 Introduction 5
1.1 Uncertain Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2 Rough Set Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3 Our Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2 Types of Rough Set 9
2.1 Rough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.2 Generalized rough sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.3 HyperRough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.4 (m, n)-SuperHyperRough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.5 MultiRough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.6 Weighted Rough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.7 Neighborhood Rough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.8 Sequential Rough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.9 ContraRough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.10 Probabilistic Rough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.11 IndetermRough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.12 HesiRough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.13 GraphicRough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.14 ClusterRough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.15 Multipolar Rough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.16 Bipartite Rough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.17 TreeRough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.18 ForestRough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.19 Dynamic Rough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.20 L-valued rough sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.21 Graded Rough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
2.22 Linguistic Rough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.23 Weak Rough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
2.24 Decision-Theoretic Rough sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
2.25 Type-n Rough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
2.26 Dominance-based Rough set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
2.27 Triangular rough set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
2.28 Game-theoretic rough sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56


# Page. 5

![Page Image](https://bcdn.docswell.com/page/L7LM2WYMJR.jpg)

2.29 Variable precision rough set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
2.30 Multi-granulation rough set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
2.31 Soft Rough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
2.32 Soft Rough Expert Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
2.33 Covering-based Rough Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
2.34 Local Rough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
2.35 Interval-valued Rough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
2.36 Tolerance Rough Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
2.37 One–directional s–Rough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
2.38 Complex Rough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
2.39 MetaRough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
2.40 T -valued Rough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
2.41 Refined Rough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
2.42 Rough cubic sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
2.43 MOD Rough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
2.44 Topological Rough Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
2.45 Preorder Rough Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
2.46 Directed Rough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
2.47 Strait Rough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
2.48 Dialectical rough set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
2.49 Sheaf Rough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
2.50 Simplicial Rough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
2.51 Persistent Rough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
2.52 Causal Rough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
2.53 Entropy-Regularized Rough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
2.54 Differentially-Private Rough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
3 Uncertain Rough Set 105
3.1 Fuzzy Rough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
3.2 Intuitionistic Fuzzy Rough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
3.3 Vague Rough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
3.4 Neutrosophic Rough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
3.5 Plithogenic Rough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
3.6 Uncertain Rough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
3.7 Functorial Rough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
3.8 Near Rough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
3.9 Z-Rough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
3.10 D-Rough Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
3.11 Similarity-based rough sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
4 Some Related Concepts for Rough Sets 129
4.1 Rough Graph . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
4.2 Rough topological spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
4.3 Rough group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
4.4 Rough Matroids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
4.5 Soft Rough Graph . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
5 Conclusion 139
Appendix (List of Tables) 143


# Page. 6

![Page Image](https://bcdn.docswell.com/page/4EMY89N6EW.jpg)

Chapter 1
Introduction
1.1 Uncertain Theory
Classical (crisp) set theory provides a precise and widely used language for formal reasoning
and mathematical modeling [1]. Over the past decades, many generalized set frameworks have
been introduced to represent uncertainty and vagueness, including fuzzy sets [2], intuitionistic
fuzzy sets [3], hesitant fuzzy sets [4], picture fuzzy sets [5], single-valued neutrosophic sets [6, 7],
quadripartitioned neutrosophic sets [8], pentapartitioned neutrosophic sets [9], double-valued
neutrosophic sets [10], hesitant neutrosophic sets [11], rough sets [12,13], plithogenic sets [14,15],
and soft sets [16, 17].
A fuzzy set assigns to each element x a single membership grade µ(x) ∈ [0, 1], thereby capturing
gradual inclusion rather than a sharp yes/no decision [2,18]. Neutrosophic sets extend this viewpoint by associating three (generally independent) degrees T (x), I(x), F (x) ∈ [0, 1], interpreted
as truth, indeterminacy, and falsity, respectively [6,19]. Because these models encode uncertainty
more flexibly than crisp sets, they have been applied widely, for example in decision-making [20] and neural networks [21, 22].
1.2 Rough Set Theory
Rough set theory models uncertainty by approximating concepts with lower and upper sets induced by indiscernibility relations in data tables [12, 23]. Rough set research provides rigorous
tools to model vagueness arising from indiscernibility, enabling data-driven approximations [24], feature reduction [25, 26], rule induction, and decision support [27, 28], with applications in machine learning [29], neural networks [30, 31], engineering [32, 33], chemistry [34, 35], diagnostics [36], and explainability. For reference, a concise comparison between classical (crisp) sets and Pawlak rough sets is provided in Table 1.1.


# Page. 7

![Page Image](https://bcdn.docswell.com/page/PER95GDNJ9.jpg)

Table 1.1: Concise comparison of classical (crisp) sets and Pawlak rough sets.

| Aspect | Classical (crisp) set | Rough set (Pawlak approximation) |
| --- | --- | --- |
| Basic data | Universe U and an exact subset A ⊆ U | Approximation space (U, R), where R is an equivalence (indiscernibility) relation; target Y ⊆ U |
| Representation of a concept | Single set A (exact concept) | Pair (Y̲, Ȳ) of lower/upper approximations |
| Membership semantics | Binary: x ∈ A or x ∉ A | Definite: x ∈ Y̲ iff [x]_R ⊆ Y; possible: x ∈ Ȳ iff [x]_R ∩ Y ≠ ∅ |
| Uncertainty source | No intrinsic uncertainty (perfect discernibility assumed) | Indiscernibility of objects under available attributes / data tables |
| Boundary / vagueness | No intrinsic boundary region | Boundary BND(Y) = Ȳ \ Y̲ captures the vague/undecidable part |
| Definability criterion | Always exact as given | Y is crisp/definable w.r.t. R iff Y̲ = Ȳ (equivalently, Y is a union of R-classes) |
| Typical analysis outputs | Exact set operations, predicates, counts | Approximations/regions; often used for feature reduction and rule induction |
1.3 Our Contributions
In light of these developments, research on rough set theory remains important. Moreover, because a large number of papers on rough sets and their extensions continue to appear, survey-style
works play an increasingly valuable role in organizing and clarifying the landscape. Motivated
by this need, this book provides a survey-style overview of rough set theory and its major developments, with an emphasis on how rough approximations support data-driven tasks.
More concretely, the book is organized as follows:
• Types of rough sets (Chapter 2). We systematically summarize a broad spectrum of
rough-set extensions, giving concise definitions and brief discussions to clarify how each model
generalizes Pawlak’s original framework.
• Uncertain rough sets (Chapter 3). We review representative hybridizations that incorporate graded or multi-component uncertainty, including fuzzy, intuitionistic fuzzy, neutrosophic, plithogenic, and related uncertain rough-set formulations.
• Related concepts (Chapter 4). We collect adjacent structures and viewpoints—such
as rough graphs, rough topological spaces, rough groups, rough matroids, and soft rough
graphs—to help readers connect rough approximations with other mathematical frameworks.
Overall, our goal is to provide a compact reference that helps readers navigate the rapidly
expanding rough-set literature and quickly locate suitable models for theoretical study and practical applications. For reference, Table 1.2 presents an at-a-glance taxonomy for rough-set–based
models.
In addition, for reference, Table 1.3 provides a practical selection guide on which rough-set family
to use under typical data conditions.


# Page. 8

![Page Image](https://bcdn.docswell.com/page/P7XQKX18EX.jpg)

Table 1.2: At-a-glance taxonomy for rough-set–based models (granulation, value semantics, outputs, and typical uses).

| Axis | Main options (keywords) | Meaning / typical choice criteria |
| --- | --- | --- |
| Granulation | equivalence, tolerance, covering, neighborhood, probabilistic | Defines the indiscernibility/approximation mechanism. Equivalence (partition, crisp indiscernibility); tolerance (similarity-based, reflexive/symmetric); covering (non-partition granules, overlaps); neighborhood (metric/graph/kNN style, local granules); probabilistic (variable precision, error-tolerant approximations). |
| Value semantics | crisp, fuzzy, intuitionistic fuzzy, neutrosophic, plithogenic, structure-valued | Specifies how membership/uncertainty is represented. Crisp: {0, 1}; fuzzy: µ ∈ [0, 1]; intuitionistic fuzzy: (µ, ν) with hesitation; neutrosophic: (T, I, F) with explicit indeterminacy; plithogenic: attribute-wise appurtenance + contradiction degree; structure-valued: intervals/vectors/distributions/sets as labels. |
| Outputs | (X̲, X̄), regions, reduct, rules | What the model produces. Approximations: lower/upper (X̲, X̄); regions: POS/BND/NEG (positive/boundary/negative); reduct: feature/attribute reduction preserving discernibility; rules: decision/association rules (often interpretable). |
| Representative uses | feature selection, rule induction, ranking, streaming, classification | Common application targets. Feature selection via reducts; rule induction for interpretable decision support; ranking via dominance/ordering-aware rough sets; streaming/incremental updates for dynamic data; classification via approximations/regions/rules. |
Please note that this book is a survey that focuses primarily on theoretical aspects. We hope
that future work by domain experts will advance algorithm design and other methodological
developments, as well as practical studies using machine learning and related techniques.


# Page. 9

![Page Image](https://bcdn.docswell.com/page/37K95W2L7D.jpg)

Table 1.3: Practical selection guide: which rough-set family to use under typical data conditions.

| Data condition / requirement | Recommended direction (granulation / values / outputs) |
| --- | --- |
| Strict indiscernibility; categorical attributes; classical setting | equivalence + crisp; outputs: (X̲, X̄), regions; reduct for feature selection |
| Similarity/approximate matching; noise; continuous attributes | tolerance or neighborhood; values: crisp/fuzzy; outputs: approximations, rules; optionally probabilistic for error tolerance |
| Overlapping groups; multiple granular views; rule mining under overlaps | covering; values: crisp/fuzzy; outputs: regions + rules (covering-based rule induction) |
| Uncertainty beyond fuzziness (hesitation / inconsistency / indeterminacy) | choose value semantics: intuitionistic fuzzy / neutrosophic / plithogenic; keep granulation as tolerance/covering/neighborhood as needed |
| Ranking/ordered decision criteria (e.g., credit scoring, risk assessment) | dominance / order-aware rough sets (often probabilistic/variable-precision variants); outputs: ranking + rules |
| Dynamic / streaming / incremental updates | neighborhood/covering with incremental approximations; outputs: incremental reduct / online rule updates |
Abstract
Rough set theory models uncertainty by approximating target concepts via lower and upper
sets induced by indiscernibility (or more general granulation) relations in data tables. This
perspective captures vagueness caused by limited observational resolution and supports set-theoretic reasoning about what can be determined with certainty versus what remains only possible.
The present book is written as a model map: rather than developing a single algorithmic pipeline
in depth, we provide a systematic survey of the main rough-set paradigms and their extension
routes. Concretely, we organize representative variants according to (i) the underlying granulation mechanism (e.g., equivalence-, tolerance-, covering-, neighborhood-, and probabilistic-based
approximations) and (ii) the uncertainty semantics attached to data and relations (e.g., crisp,
fuzzy, intuitionistic fuzzy, neutrosophic, and plithogenic settings), and we explain how each choice
changes the form of approximations and the interpretation of boundary regions. Throughout,
small illustrative examples are used to clarify modeling intent and typical use-cases in classification and decision support.
Finally, we note an important clarification of scope: because this book is intended as a model map, readers should not expect feature reduction and rule induction to be its primary goals. Those topics are central in the rough-set literature, but here they are discussed mainly as motivating applications and as pointers to the broader ecosystem; the aim is to survey and position rough-set models and extensions.
Keywords: Rough Set, Rough Theory, Uncertain Theory, Fuzzy Set


# Page. 10

![Page Image](https://bcdn.docswell.com/page/LJ3WK146J5.jpg)

Chapter 2
Types of Rough Set
A wide variety of extended rough-set models have been proposed. In this chapter, we give a survey-style introduction to these extensions and briefly discuss each.
2.1 Rough Set
Rough set theory approximates a target set via lower and upper approximations induced by
equivalence classes, modeling vagueness and uncertainty [12, 37].
Definition 2.1.1 (Rough Set Approximation). [12, 37, 38] Let X be a finite universe and let R ⊆ X × X be an equivalence relation, whose equivalence classes are written [x]_R for each x ∈ X. For any subset Y ⊆ X, define:

Y̲ = { x ∈ X | [x]_R ⊆ Y },    Ȳ = { x ∈ X | [x]_R ∩ Y ≠ ∅ }.

Here Y̲ collects all elements whose entire indiscernibility class lies inside Y (those that definitely belong), while Ȳ gathers elements whose class meets Y nontrivially (those that possibly belong). The pair (Y̲, Ȳ) is called the rough approximation of Y, and satisfies

Y̲ ⊆ Y ⊆ Ȳ.
Example 2.1.2 (Medical triage with incomplete symptom information). Let U = {p1, p2, p3, p4, p5, p6} be a set of patients. Consider two easily observed symptoms (condition attributes):

C = {Fever, Cough},

where Fever ∈ {High, Normal} and Cough ∈ {Yes, No}. Assume the recorded symptom table is:

| Patient | Fever | Cough | Diagnosis (ground truth) |
| --- | --- | --- | --- |
| p1 | High | Yes | Flu |
| p2 | High | Yes | Cold |
| p3 | High | No | Flu |
| p4 | High | No | Flu |
| p5 | Normal | No | Healthy |
| p6 | Normal | No | Healthy |


# Page. 11

![Page Image](https://bcdn.docswell.com/page/8JDK3XQMEG.jpg)

Define the indiscernibility relation R = IND(C) by

(x, y) ∈ R ⇐⇒ x and y have identical values of Fever and Cough.

Then the equivalence classes are

[p1]_R = {p1, p2},  [p3]_R = {p3, p4},  [p5]_R = {p5, p6}.

Let the target concept be the set of influenza cases

Y := { p ∈ U | p is diagnosed as Flu } = {p1, p3, p4}.

The Pawlak lower and upper approximations of Y are

Y̲ = { x ∈ U | [x]_R ⊆ Y } = {p3, p4},
Ȳ = { x ∈ U | [x]_R ∩ Y ≠ ∅ } = {p1, p2, p3, p4}.

Hence the boundary region is

BND(Y) = Ȳ \ Y̲ = {p1, p2},
which reflects the real-life ambiguity: patients with High fever and Yes cough cannot be classified
with certainty as Flu using only these two symptoms.
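The hand computation above can be checked mechanically. Below is a minimal illustrative sketch, not taken from the book: the function name `approximations` and the dict-based encoding of the partition are our own choices.

```python
def approximations(universe, classes, target):
    """Pawlak lower/upper approximations; classes maps each element to its block."""
    lower = {x for x in universe if classes[x] <= target}   # [x]_R subset of Y
    upper = {x for x in universe if classes[x] & target}    # [x]_R meets Y
    return lower, upper

U = {"p1", "p2", "p3", "p4", "p5", "p6"}
# Indiscernibility classes under (Fever, Cough), as derived above.
partition = [frozenset({"p1", "p2"}), frozenset({"p3", "p4"}), frozenset({"p5", "p6"})]
classes = {x: block for block in partition for x in block}

Y = {"p1", "p3", "p4"}  # influenza cases (ground truth)
lower, upper = approximations(U, classes, Y)
print(sorted(lower))          # ['p3', 'p4']
print(sorted(upper))          # ['p1', 'p2', 'p3', 'p4']
print(sorted(upper - lower))  # boundary region: ['p1', 'p2']
```

The printed boundary {p1, p2} matches the boundary region BND(Y) computed by hand above.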
2.2 Generalized rough sets
Generalized rough sets extend Pawlak approximations by replacing equivalence relations with
broader relations or granulations, yielding flexible lower–upper approximation operators [39–43].
Definition 2.2.1 (Generalized rough set (relation-based)). Let U and W be nonempty universes
and let R ⊆ U × W be an arbitrary binary relation. For each x ∈ U , define the successor
neighborhood (image) of x by
R(x) := { y ∈ W | (x, y) ∈ R } ⊆ W.
For any target set A ⊆ W , the lower and upper approximations of A with respect to (U, W, R)
are defined by
R̲(A) := { x ∈ U | R(x) ⊆ A },    R̄(A) := { x ∈ U | R(x) ∩ A ≠ ∅ }.

The ordered pair (R̲(A), R̄(A)) is called the generalized rough set of A (or the R-rough set induced by A).
Example 2.2.2 (Eco-conscious customers via a customer–product relation). Let U be a set of customers and W a set of products:

U = {Alice, Bob, Carol, Dan},  W = {p1, p2, p3, p4}.

Interpret the binary relation R ⊆ U × W as “customer x purchased product y during the last month”. Assume the recorded purchases are:

R(Alice) = {p1, p2},  R(Bob) = {p2, p3},  R(Carol) = {p3},  R(Dan) = {p2}.


# Page. 12

![Page Image](https://bcdn.docswell.com/page/VEPK4PLQ78.jpg)

| Aspect | Pawlak rough set (classical) | Generalized rough set (relation-based) |
| --- | --- | --- |
| Universes | Single universe U | Two universes U (objects) and W (attributes/targets) |
| Underlying relation | Equivalence relation E ⊆ U × U (indiscernibility) | Arbitrary binary relation R ⊆ U × W |
| Neighborhood of x | Equivalence class [x]_E = {u ∈ U : (x, u) ∈ E} | Successor neighborhood R(x) = {y ∈ W : (x, y) ∈ R} |
| Target concept | A ⊆ U | A ⊆ W |
| Lower approximation | E̲(A) = {x ∈ U : [x]_E ⊆ A} | R̲(A) = {x ∈ U : R(x) ⊆ A} |
| Upper approximation | Ē(A) = {x ∈ U : [x]_E ∩ A ≠ ∅} | R̄(A) = {x ∈ U : R(x) ∩ A ≠ ∅} |
| Boundary region | BND_E(A) = Ē(A) \ E̲(A) | BND_R(A) = R̄(A) \ R̲(A) |
| Exactness / definability | A is exact iff E̲(A) = Ē(A) (i.e., A is a union of E-classes) | A is exact (w.r.t. (U, W, R)) iff R̲(A) = R̄(A) |
| Expressiveness | Models uncertainty from indiscernibility (partition of U) | Models uncertainty from general relational links (may be non-symmetric, non-transitive, non-reflexive; cross-universe) |
| Special-case relation | — | If U = W and R = E is an equivalence relation, then R(x) = [x]_E and (R̲(A), R̄(A)) = (E̲(A), Ē(A)) |

Table 2.1: Concise comparison of Pawlak rough sets and relation-based generalized rough sets.
Let the target set of eco-labeled products be
A = {p1 , p2 } ⊆ W.
Then the generalized (relation-based) lower and upper approximations of A are

R̲(A) = { x ∈ U | R(x) ⊆ A } = {Alice, Dan},
R̄(A) = { x ∈ U | R(x) ∩ A ≠ ∅ } = {Alice, Bob, Dan}.

Hence, Alice and Dan are definitely eco-consumers (all their purchases are eco-labeled), Bob is possibly an eco-consumer (at least one eco-labeled purchase), and Carol lies in the negative region (no eco-labeled purchase).
For reference, a concise comparison between Pawlak rough sets and relation-based generalized
rough sets is provided in Table 2.1.
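The relation-based approximations of Definition 2.2.1 differ from the Pawlak case only in using the successor neighborhood R(x) in place of an equivalence class. A minimal sketch on the purchase data above (the variable names are our own illustrative choices):

```python
# Successor neighborhoods R(x): products purchased by each customer.
R = {
    "Alice": {"p1", "p2"},
    "Bob":   {"p2", "p3"},
    "Carol": {"p3"},
    "Dan":   {"p2"},
}
A = {"p1", "p2"}  # eco-labeled products

lower = {x for x, nbhd in R.items() if nbhd <= A}   # R(x) subset of A
upper = {x for x, nbhd in R.items() if nbhd & A}    # R(x) meets A
negative = set(R) - upper                           # no eco-labeled purchase

print(sorted(lower))     # ['Alice', 'Dan']
print(sorted(upper))     # ['Alice', 'Bob', 'Dan']
print(sorted(negative))  # ['Carol']
```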
2.3 HyperRough Set
The HyperRough Set extends rough set theory by incorporating multiple attributes. Its formal
definition is given below [44].
Definition 2.3.1 (HyperRough Set). [44] Let X be a nonempty finite universe, and let
T1 , T2 , . . . , Tn be n distinct attributes with corresponding domains J1 , J2 , . . . , Jn . Define the
Cartesian product
J = J1 × J2 × · · · × Jn .
Let R ⊆ X × X be an equivalence relation on X , with [x]R denoting the equivalence class of x.
A HyperRough Set over X is a pair (F, J), where:


# Page. 13

![Page Image](https://bcdn.docswell.com/page/27VVX2QP7Q.jpg)

• F : J → P(X) is a mapping that assigns to each attribute-value combination a = (a1, a2, ..., an) ∈ J a subset F(a) ⊆ X.
• For each a ∈ J, the rough set approximations of F(a) are defined as

F̲(a) = { x ∈ X | [x]_R ⊆ F(a) },    F̄(a) = { x ∈ X | [x]_R ∩ F(a) ≠ ∅ }.

Here, F̲(a) comprises all elements whose equivalence classes are completely contained within F(a), while F̄(a) contains elements whose equivalence classes intersect F(a). Additionally, the following properties hold for all a ∈ J:
• F̲(a) ⊆ F̄(a).
• If F(a) = ∅, then F̲(a) = F̄(a) = ∅.
• If F(a) = X, then F̲(a) = F̄(a) = X.
Example 2.3.2 (Loan screening with partially observed applicant profiles). Let

X = {x1, x2, x3, x4, x5, x6}

be a set of loan applicants. Consider three attributes

T1 = Employment ∈ J1 := {Stable, Unstable},
T2 = Credit ∈ J2 := {Good, Bad},
T3 = Income ∈ J3 := {High, Low},

so that

J = J1 × J2 × J3.

Assume the (true) applicant table is

| Applicant | Employment | Credit | Income |
| --- | --- | --- | --- |
| x1 | Stable | Good | High |
| x2 | Stable | Good | Low |
| x3 | Stable | Bad | High |
| x4 | Stable | Bad | Low |
| x5 | Unstable | Good | Low |
| x6 | Unstable | Bad | Low |

Define the HyperRough mapping F : J → P(X) by

F(e, c, i) := { x ∈ X | (Employment(x), Credit(x), Income(x)) = (e, c, i) }.

For instance,

F(Stable, Good, High) = {x1},  F(Stable, Good, Low) = {x2}.


# Page. 14

![Page Image](https://bcdn.docswell.com/page/5JGLVRWQ7L.jpg)

In practice, a bank may only observe Employment and Credit at the first stage, while Income is verified later. Model this limited observability by the equivalence relation R ⊆ X × X:

(x, y) ∈ R ⇐⇒ Employment(x) = Employment(y) and Credit(x) = Credit(y).

Then

[x1]_R = [x2]_R = {x1, x2},  [x3]_R = [x4]_R = {x3, x4},  [x5]_R = {x5},  [x6]_R = {x6}.
Now consider the attribute combination a = (Stable, Good, High). Its rough approximations are

F̲(a) = { x ∈ X | [x]_R ⊆ F(a) } = ∅,
F̄(a) = { x ∈ X | [x]_R ∩ F(a) ≠ ∅ } = {x1, x2}.

Interpretation: using only the coarse information (Employment, Credit), the bank cannot definitely identify a “Stable–Good–High” applicant (the lower approximation is empty), but it can identify those who are possibly in that profile (the upper approximation contains x1 and x2), since x1 and x2 are indiscernible at this stage.

By contrast, for a′ = (Unstable, Good, Low) we have F(a′) = {x5} and

F̲(a′) = F̄(a′) = {x5},

because [x5]_R = {x5} is a singleton granule under the observed attributes.
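The two-stage observability in Example 2.3.2 can be mimicked directly: approximate each F(a) using equivalence classes built from the observed attributes only. A hedged sketch (the helper names `F`, `eq_class`, and `rough` are illustrative, not from the book):

```python
# Full (Employment, Credit, Income) profiles from the applicant table above.
profile = {
    "x1": ("Stable", "Good", "High"),
    "x2": ("Stable", "Good", "Low"),
    "x3": ("Stable", "Bad", "High"),
    "x4": ("Stable", "Bad", "Low"),
    "x5": ("Unstable", "Good", "Low"),
    "x6": ("Unstable", "Bad", "Low"),
}

def F(a):
    """Applicants whose full attribute tuple equals a."""
    return {x for x, p in profile.items() if p == a}

def eq_class(x):
    """Indiscernibility class of x under the observed attributes only."""
    e, c, _ = profile[x]
    return {y for y, (e2, c2, _) in profile.items() if (e2, c2) == (e, c)}

def rough(a):
    """Lower/upper approximations of F(a) w.r.t. the coarse relation R."""
    target = F(a)
    lower = {x for x in profile if eq_class(x) <= target}
    upper = {x for x in profile if eq_class(x) & target}
    return lower, upper

lo, up = rough(("Stable", "Good", "High"))
print(sorted(lo), sorted(up))    # [] ['x1', 'x2']
lo2, up2 = rough(("Unstable", "Good", "Low"))
print(sorted(lo2), sorted(up2))  # ['x5'] ['x5']
```

The empty lower approximation for (Stable, Good, High) reproduces the example: the coarse view cannot certify that profile, only flag it as possible.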
2.4 (m, n)-SuperHyperRough Set
An (m, n)-SuperHyperRough set maps mth-iterated subsets to nth-iterated subsets, using lifted relations to form rough lower/upper approximations within hierarchical power-set levels [45].
Let X be a nonempty finite universe and let R ⊆ X × X be an equivalence relation on X. We write [x]_R = { y ∈ X | (x, y) ∈ R } for the R-equivalence class of x. For each k ≥ 0, define the iterated power set

P^0(X) = X,    P^{k+1}(X) = P(P^k(X)).

We will lift R to an equivalence R_k on P^k(X) and then define (m, n)-SuperHyperRough Sets.
Definition 2.4.1 (Lifted Relation R_k). Define recursively for each k ≥ 0:

R_0 = R ⊆ X × X,

and for k ≥ 1, define R_k ⊆ P^k(X) × P^k(X) by declaring, for A, B ∈ P^k(X),

A R_k B ⇐⇒ (∀ a ∈ A ∃ b ∈ B : (a, b) ∈ R_{k−1}) ∧ (∀ b ∈ B ∃ a ∈ A : (a, b) ∈ R_{k−1}).


# Page. 15

![Page Image](https://bcdn.docswell.com/page/47QY6V3WEP.jpg)

Definition 2.4.2 ((m, n)-SuperHyperRough Set). Fix integers m, n ≥ 0. An (m, n)-SuperHyperRough Set on (X, R) is a function

F : P^m(X) −→ P^n(X).

For each A ∈ P^m(X), set C = F(A) ∈ P^n(X). Its lower and upper approximations in P^{n−1}(X) are

C̲ = { B ∈ P^{n−1}(X) | [B]_{R_{n−1}} ⊆ C },
C̄ = { B ∈ P^{n−1}(X) | [B]_{R_{n−1}} ∩ C ≠ ∅ },

where [B]_{R_{n−1}} = { D ∈ P^{n−1}(X) | B R_{n−1} D }. Thus each A yields the rough pair (C̲, C̄).
Example 2.4.3 ((1, 2)-SuperHyperRough set for grocery-bundle recommendation). Let X be a finite set of products

X = {m_o, m_r, b_w, b_g},

where m_o denotes organic milk, m_r regular milk, b_w white bread, and b_g whole-grain bread. Define an equivalence relation R ⊆ X × X expressing category indiscernibility:

x R y ⇐⇒ (x, y ∈ {m_o, m_r}) or (x, y ∈ {b_w, b_g}).

Hence the R-classes are

[m_o]_R = [m_r]_R = {m_o, m_r},  [b_w]_R = [b_g]_R = {b_w, b_g}.
Fix (m, n) = (1, 2). Then P^1(X) = P(X) and P^2(X) = P(P(X)). We interpret A ∈ P^1(X) as a shopping basket, and F(A) ∈ P^2(X) as a collection of suggested bundles (each suggested bundle is itself a subset of X).

Consider the basket

A = {m_o, b_g} ∈ P^1(X)  (organic milk + whole-grain bread).

Suppose the recommender proposes a small list of alternatives at A:

C := F(A) = { {m_o, b_g}, {m_r, b_g}, {m_o, b_w} } ∈ P^2(X).

Lift R to an equivalence R_1 on P^1(X) = P(X) (cf. Definition 2.4.1) by requiring a mutual category-wise match: for B_1, B_2 ⊆ X,

B_1 R_1 B_2 ⇐⇒ (∀ u ∈ B_1 ∃ v ∈ B_2 : u R v) ∧ (∀ v ∈ B_2 ∃ u ∈ B_1 : u R v).

In particular, the R_1-class of the “milk + bread” bundle {m_o, b_g} is

E := [{m_o, b_g}]_{R_1} = { {m_o, b_g}, {m_r, b_g}, {m_o, b_w}, {m_r, b_w} },

because any bundle consisting of one milk and one bread is indistinguishable at the category level.

We now form the rough approximations of C ⊆ P^1(X) with respect to R_1:

C̲ = { B ∈ P^1(X) | [B]_{R_1} ⊆ C },
C̄ = { B ∈ P^1(X) | [B]_{R_1} ∩ C ≠ ∅ }.


# Page. 16

![Page Image](https://bcdn.docswell.com/page/KE4W4M11J1.jpg)

| Aspect | Rough set (Pawlak) | (m, n)-SuperHyperRough set [45] |
| --- | --- | --- |
| Base universe | Single universe X | Same base X, but works on iterated levels P^k(X) |
| Underlying relation | Equivalence R ⊆ X × X (indiscernibility on objects) | Lifted equivalences R_k on P^k(X), defined recursively from R |
| Basic neighborhood | [x]_R ⊆ X for x ∈ X | [B]_{R_k} ⊆ P^k(X) for B ∈ P^k(X) |
| Target to approximate | A fixed set A ⊆ X | For each input A ∈ P^m(X), a level-n target C = F(A) ∈ P^n(X) |
| What is being modeled | Uncertain classification of individuals in X | Uncertainty for higher-order / hierarchical objects (sets of sets, etc.) |
| Lower approximation | A̲ = {x ∈ X : [x]_R ⊆ A} ⊆ X | C̲ = {B ∈ P^{n−1}(X) : [B]_{R_{n−1}} ⊆ C} ⊆ P^{n−1}(X) |
| Upper approximation | Ā = {x ∈ X : [x]_R ∩ A ≠ ∅} ⊆ X | C̄ = {B ∈ P^{n−1}(X) : [B]_{R_{n−1}} ∩ C ≠ ∅} ⊆ P^{n−1}(X) |
| Boundary region | BND(A) = Ā \ A̲ | BND(C) = C̄ \ C̲ (at level P^{n−1}(X)) |
| Input–output form | Approximates one target set A (static) | A function F : P^m(X) → P^n(X) (context-dependent: each input A yields a rough pair) |
| Reduction to Pawlak case | — | If n = 1 and F returns a subset of X, then approximations are taken in P^0(X) = X using R_0 = R (Pawlak-type behavior) |

Table 2.2: Concise comparison of Pawlak rough sets and (m, n)-SuperHyperRough sets.
Since E ⊄ C (the bundle {m_r, b_w} is not included in the recommendation list), no element of E is certainly recommended; in particular,

C̲ ∩ E = ∅.

On the other hand, for every B ∈ E we have [B]_{R_1} = E and E ∩ C ≠ ∅, so

E ⊆ C̄.

Thus, under the coarse indiscernibility “same category,” the system can only assert that some milk–bread combination is recommended (membership in the upper approximation), while it cannot certify which specific variant is recommended with certainty (the lower approximation is empty on E).
For reference, Table 2.2 provides a concise comparison of Pawlak rough sets and (m, n)-SuperHyperRough
sets.
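The lifted relation R_1 of Example 2.4.3 is small enough to check mechanically. In the sketch below, an assumed `category` map stands in for the relation R, and `r1` implements the mutual-match condition of Definition 2.4.1; all names are our own:

```python
# Category labels induce the base relation R: same category <=> related.
category = {"mo": "milk", "mr": "milk", "bw": "bread", "bg": "bread"}

def related(u, v):
    """u R v  <=>  u and v belong to the same category."""
    return category[u] == category[v]

def r1(b1, b2):
    """B1 R_1 B2: every item of each bundle matches some item of the other."""
    return (all(any(related(u, v) for v in b2) for u in b1)
            and all(any(related(u, v) for u in b1) for v in b2))

# All one-milk-one-bread bundles form a single R_1-class E.
E = [{"mo", "bg"}, {"mr", "bg"}, {"mo", "bw"}, {"mr", "bw"}]
assert all(r1(E[0], b) for b in E)

# Recommendation list C = F(A) for the basket A = {mo, bg}.
C = [{"mo", "bg"}, {"mr", "bg"}, {"mo", "bw"}]

in_lower = all(b in C for b in E)   # [B]_{R_1} contained in C ?
in_upper = any(b in C for b in E)   # [B]_{R_1} meets C ?
print(in_lower, in_upper)           # False True
```

As the example concludes, E lies inside the upper approximation but misses the lower one, because the unrecommended bundle {mr, bw} is R_1-indistinguishable from the recommended ones.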
2.5 MultiRough Set
A MultiRough set assigns, for each equivalence relation in a family, the Pawlak lower and upper approximations of a subset, collecting these indexed pairs simultaneously [44].


# Page. 17

![Page Image](https://bcdn.docswell.com/page/L71Y48G5JG.jpg)

Definition 2.5.1 (MultiRough set). Let X be a nonempty finite universe and let I be a
nonempty finite index set. For each i ∈ I , let
Ri ⊆ X × X
be an equivalence relation, and write the Ri -equivalence class of x ∈ X as
[x]Ri := { y ∈ X | (x, y) ∈ Ri }.
For any Y ⊆ X, define the (Pawlak) lower and upper approximations with respect to Ri by

Y̲i := { x ∈ X | [x]Ri ⊆ Y },    Ȳi := { x ∈ X | [x]Ri ∩ Y ≠ ∅ }.

A MultiRough set of Y (with respect to the family {Ri}_{i∈I}) is the indexed family

MR_I(Y) := ( (Y̲i, Ȳi) )_{i∈I} ∈ (P(X) × P(X))^I.

Remark 2.5.2. For every i ∈ I we have Y̲i ⊆ Y ⊆ Ȳi, hence each component (Y̲i, Ȳi) is an ordinary rough approximation pair. If |I| = 1, then MR_I(Y) reduces to the classical rough approximation of Y. In an information system (X, A), a common choice is Ri = IND(Bi) induced by attribute subsets Bi ⊆ A, so that MR_I(Y) records multiple attribute-granular rough views of the same target set Y.
Example 2.5.3 (MultiRough set in hospital triage under multiple information sources). Let
X = {p1 , p2 , p3 , p4 , p5 , p6 } be a set of patients arriving at an emergency department. Let the
target concept be
Y := {p ∈ X | p truly requires ICU-level care} = {p1 , p3 , p4 }.
We consider two different (but common) ways of grouping patients into indiscernibility classes,
leading to two equivalence relations R1 and R2 on X .
(i) Coarse triage-vital grouping. Define R1 by equality of the pair (oxygen saturation
category, fever category):
p R1 q ⇐⇒ (O2Cat(p), FeverCat(p)) = (O2Cat(q), FeverCat(q)).
Assume this yields the partition
[p1 ]R1 = [p2 ]R1 = {p1 , p2 },
[p3 ]R1 = [p4 ]R1 = {p3 , p4 },
[p5 ]R1 = [p6 ]R1 = {p5 , p6 }.
Then the Pawlak approximations of Y w.r.t. R1 are

Y̲1 = {x ∈ X | [x]R1 ⊆ Y} = {p3, p4},    Ȳ1 = {x ∈ X | [x]R1 ∩ Y ≠ ∅} = {p1, p2, p3, p4}.
(ii) Imaging+lab grouping. Define R2 by equality of the pair (CT finding category, inflammation-marker category):
p R2 q ⇐⇒ (CTCat(p), InflamCat(p)) = (CTCat(q), InflamCat(q)).


Assume this yields the partition
[p1 ]R2 = [p3 ]R2 = {p1 , p3 },
[p2 ]R2 = [p4 ]R2 = {p2 , p4 },
[p5 ]R2 = [p6 ]R2 = {p5 , p6 }.
Then the Pawlak approximations of Y w.r.t. R2 are

Y̲2 = {x ∈ X | [x]R2 ⊆ Y} = {p1, p3},    Ȳ2 = {x ∈ X | [x]R2 ∩ Y ≠ ∅} = {p1, p2, p3, p4}.
Hence the MultiRough set of Y with respect to I = {1, 2} is

MR_I(Y) = ( (Y̲1, Ȳ1), (Y̲2, Ȳ2) ).

Interpretation: under vital-sign granulation, p3, p4 are definitely ICU cases, whereas under imaging+lab granulation, p1, p3 are definitely ICU cases; both views agree on the possible ICU cases {p1, p2, p3, p4}.
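The two-relation computation above can be sketched directly in Python; the helper names `lower`, `upper` and the dictionary layout are our own encoding, not notation from the text:

```python
# Pawlak lower/upper approximations per relation; the partitions and the
# target set Y are those of the triage example (Example 2.5.3).

def lower(partition, Y):
    """Pawlak lower approximation: union of blocks contained in Y."""
    return {x for block in partition for x in block if block <= Y}

def upper(partition, Y):
    """Pawlak upper approximation: union of blocks meeting Y."""
    return {x for block in partition for x in block if block & Y}

partitions = {
    1: [{"p1", "p2"}, {"p3", "p4"}, {"p5", "p6"}],  # U/R1: vital signs
    2: [{"p1", "p3"}, {"p2", "p4"}, {"p5", "p6"}],  # U/R2: imaging + labs
}
Y = {"p1", "p3", "p4"}  # patients truly requiring ICU-level care

# The MultiRough set is the indexed family of (lower, upper) pairs.
MR = {i: (lower(P, Y), upper(P, Y)) for i, P in partitions.items()}
# MR[1] = ({p3, p4}, {p1, p2, p3, p4}); MR[2] = ({p1, p3}, {p1, p2, p3, p4})
```

The same two helpers serve for every Pawlak-style model in this chapter, since each is parameterized only by a partition and a target set.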
Proposition 2.5.4 (Monotonicity (componentwise)). Fix i ∈ I. If Y ⊆ Z ⊆ X, then

Y̲i ⊆ Z̲i    and    Ȳi ⊆ Z̄i.

Consequently, MR_I(Y) is monotone in Y componentwise.
Proof. Fix i and assume Y ⊆ Z. If x ∈ Y̲i then [x]Ri ⊆ Y ⊆ Z, hence x ∈ Z̲i. If x ∈ Ȳi then [x]Ri ∩ Y ≠ ∅, and since Y ⊆ Z we also have [x]Ri ∩ Z ≠ ∅, hence x ∈ Z̄i.
An Iterated MultiRough Set repeatedly applies multiple rough approximations under several
equivalence relations, producing a recursively nested, indexed family of lower–upper approximation pairs.
Definition 2.5.5 (Iterated MultiRough types). Define a hierarchy of sets {T_k(X)}_{k≥0} recursively by

T_0(X) := P(X),    T_{k+1}(X) := (T_k(X) × T_k(X))^I    (k ≥ 0).

Thus, an element of T_{k+1}(X) is a function

I → T_k(X) × T_k(X),    i ↦ (A_i^−, A_i^+),

so T_{k+1}(X) may be viewed as an I-indexed family of "rough pairs" at level k.
Definition 2.5.6 (Iterated MultiRough set). For each k ≥ 0, define a map

IMR_I^(k) : P(X) −→ T_k(X)

recursively by

IMR_I^(0)(Y) := Y ∈ P(X),

and for k ≥ 0,

IMR_I^(k+1)(Y) := ( ( IMR_I^(k)(Y̲i), IMR_I^(k)(Ȳi) ) )_{i∈I} ∈ (T_k(X) × T_k(X))^I.

We call IMR_I^(k)(Y) the iterated MultiRough set of depth k of Y. In particular, IMR_I^(1)(Y) = MR_I(Y) is the ordinary MultiRough set.


Theorem 2.5.7 (Well-definedness of iterated MultiRough sets). For every integer k ≥ 0 and every Y ⊆ X, the value IMR_I^(k)(Y) is well-defined and satisfies

IMR_I^(k)(Y) ∈ T_k(X).

Equivalently, the recursion in Definition 2.5.6 defines a total function IMR_I^(k) : P(X) → T_k(X) for each k.
Proof. We proceed by induction on k.
Base case k = 0. For any Y ⊆ X, IMR_I^(0)(Y) = Y ∈ P(X) = T_0(X) by definition.
Inductive step. Assume for some k ≥ 0 that IMR_I^(k) is well-defined and that IMR_I^(k)(Z) ∈ T_k(X) holds for all Z ⊆ X. Fix Y ⊆ X. For each i ∈ I, the Pawlak approximations Y̲i and Ȳi are subsets of X by construction, hence Y̲i, Ȳi ∈ P(X). Therefore, the induction hypothesis implies that

IMR_I^(k)(Y̲i) ∈ T_k(X)  and  IMR_I^(k)(Ȳi) ∈ T_k(X)    (i ∈ I).

Consequently, for each i ∈ I the ordered pair

( IMR_I^(k)(Y̲i), IMR_I^(k)(Ȳi) ) ∈ T_k(X) × T_k(X)

is well-defined, and hence the I-indexed family of these pairs lies in

(T_k(X) × T_k(X))^I = T_{k+1}(X).

By Definition 2.5.6, this family is exactly IMR_I^(k+1)(Y). Thus IMR_I^(k+1)(Y) ∈ T_{k+1}(X), and the recursion defines a total function at level k + 1.
This completes the induction.
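The recursion of Definition 2.5.6 translates into a few lines of Python; the function name `imr` and the `frozenset` encoding of level-0 sets are our own choices, made so that nested results are hashable and comparable:

```python
# Iterated MultiRough set IMR^(k): depth 0 returns Y itself; depth k+1
# applies every (lower, upper) pair and recurses on the results, yielding
# a nested, index-keyed structure, as in Definition 2.5.6.

def lower(partition, Y):
    return frozenset(x for b in partition for x in b if b <= Y)

def upper(partition, Y):
    return frozenset(x for b in partition for x in b if b & Y)

def imr(partitions, Y, k):
    """partitions: dict i -> partition (list of frozensets); Y: frozenset."""
    if k == 0:
        return Y
    return {i: (imr(partitions, lower(P, Y), k - 1),
                imr(partitions, upper(P, Y), k - 1))
            for i, P in partitions.items()}

parts = {
    1: [frozenset({"p1", "p2"}), frozenset({"p3", "p4"}), frozenset({"p5", "p6"})],
    2: [frozenset({"p1", "p3"}), frozenset({"p2", "p4"}), frozenset({"p5", "p6"})],
}
Y = frozenset({"p1", "p3", "p4"})

# Depth 1 recovers the ordinary MultiRough set MR_I(Y).
assert imr(parts, Y, 1)[1] == (frozenset({"p3", "p4"}),
                               frozenset({"p1", "p2", "p3", "p4"}))
# Depth 2 nests a full rough pair inside each component of the depth-1 pairs.
depth2 = imr(parts, Y, 2)
```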
2.6 Weighted Rough Set
Weighted rough sets attach weights to attributes (or objects) and incorporate these weights into
approximation operators, so that imbalanced data and heterogeneous attribute importance can
be handled in a controlled manner [46–49]. As a related line of research, weighted fuzzy rough
sets have also been studied [50, 51].
Definition 2.6.1 (Information system). [52] An information system is a quadruple
S = (U, A, {Va }a∈A , {fa }a∈A ),
where:
• U is a finite, nonempty set of objects (e.g., patients, loan applicants, or regions).


• A is a finite set of attributes.
• For each a ∈ A, Va is a nonempty set called the domain of a (e.g., VFever = {High, Normal}).
• For each a ∈ A, fa : U → Va assigns to each object x ∈ U an attribute value fa (x) ∈ Va .
Often, A is partitioned into condition attributes C and a (distinguished) decision attribute D,
and we write
S = (U, C ∪ {D}).
For example, in a medical diagnosis system, C may contain symptoms (Fever, Cough, Fatigue),
while D represents the diagnosis (e.g., Flu vs. NonFlu).
Definition 2.6.2 (Weighted rough set). [52] Let S = (U, C ∪ {D}) be an information system.
(i) Indiscernibility and classical lower approximation. For any B ⊆ C , the indiscernibility
relation induced by B is
(x, y) ∈ IND(B) ⇐⇒ ∀a ∈ B, fa (x) = fa (y).
It partitions U into equivalence classes [x]B . For a target set X ⊆ U , the Pawlak lower approximation is
X̲B := {x ∈ U | [x]B ⊆ X}.
(ii) Positive region and dependency degree. Let U/IND(D) denote the partition of U into decision classes. The positive region of D with respect to B is

POSB(D) := ⋃ { G ∈ U/IND(B) | ∃ H ∈ U/IND(D) with G ⊆ H }.

The dependency degree of D on B is

γB := |POSB(D)| / |U| ∈ [0, 1].

(iii) Attribute significance and weights. For a ∈ B, the significance of a (relative to B) is

θ(a) := γB − γB\{a} (≥ 0).

Assuming Σ_{b∈B} θ(b) > 0, define the normalized weight of a by

w(a) := θ(a) / Σ_{b∈B} θ(b).

Then w(a) ≥ 0 and Σ_{a∈B} w(a) = 1.
(iv) Weighted lower approximation (attribute-wise weighted inclusion). For each single attribute a ∈ B, write [x]a := [x]{a} for the granule induced by {a}. Fix a threshold α ∈ [0, 1] and define the score

σw(x; X, B) := Σ_{a∈B} w(a) · I([x]a ⊆ X),


where I(ϕ) = 1 if ϕ is true and I(ϕ) = 0 otherwise. The weighted lower approximation of X with respect to (B, w) is

B̲w(X) := { x ∈ U | σw(x; X, B) ≥ α }.

Remark. If one replaces I([x]a ⊆ X) by I([x]B ⊆ X) inside the sum, then the sum becomes I([x]B ⊆ X) · Σ_{a∈B} w(a) = I([x]B ⊆ X), and the weights have no effect. The attribute-wise formulation above avoids this degeneracy and realizes a genuine weighted inclusion.
Example 2.6.3 (Medical triage as a weighted rough set). Let U = {p1 , p2 , p3 , p4 , p5 , p6 } be
a set of patients and consider S = (U, C ∪ {D}) with condition attributes C = {F, Cg, F a},
where F = Fever, Cg = Cough, F a = Fatigue, and D ∈ {Flu, NonFlu}.
The observed data are:

| Patient | F | Cg | Fa | D |
|---|---|---|---|---|
| p1 | H | Y | Y | Flu |
| p2 | H | Y | Y | Flu |
| p3 | H | Y | N | Flu |
| p4 | H | N | Y | NonFlu |
| p5 | N | N | N | NonFlu |
| p6 | N | N | N | NonFlu |
Fix B = C . The IND(B)-granules are
[p1 ]B = {p1 , p2 },
[p3 ]B = {p3 },
[p4 ]B = {p4 },
[p5 ]B = {p5 , p6 }.
Each granule is decision-pure, hence POSB (D) = U and γB = 1.
Remove one attribute at a time:
• For B \ {F} = {Cg, Fa}, the induced granules remain decision-pure, hence γB\{F} = 1.
• For B \ {Cg} = {F, Fa}, the granule {p1, p2, p4} (same (F, Fa) = (H, Y)) mixes Flu/NonFlu, so only {p3} and {p5, p6} belong to the positive region. Thus |POSB\{Cg}(D)| = 3 and γB\{Cg} = 3/6 = 1/2.
• For B \ {Fa} = {F, Cg}, the induced granules are decision-pure again, hence γB\{Fa} = 1.
Therefore,

θ(F) = 0,    θ(Cg) = 1/2,    θ(Fa) = 0,

so the normalized weights are

w(Cg) = 1,    w(F) = w(Fa) = 0.


Let the target concept be the set of flu patients X = {p1, p2, p3} and take α = 1. Since only Cg has nonzero weight, the score reduces to

σw(x; X, B) = I([x]Cg ⊆ X),

where the Cg-granules are

[p1]Cg = [p2]Cg = [p3]Cg = {p1, p2, p3} ⊆ X,    [p4]Cg = [p5]Cg = [p6]Cg = {p4, p5, p6} ⊄ X.

Hence,

B̲w(X) = {p1, p2, p3}.

Interpretation: in this dataset, cough carries the full weight, and the weighted rough model selects exactly those patients whose cough-granule is certainly contained in the flu concept.
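A minimal computational sketch of Example 2.6.3, assuming the tabulated data above; the helper names `granule`, `gamma` and the column encoding are our own:

```python
# Weighted rough set of Definition 2.6.2 on the flu-triage data.

data = {  # (F, Cg, Fa, D) per patient
    "p1": ("H", "Y", "Y", "Flu"),
    "p2": ("H", "Y", "Y", "Flu"),
    "p3": ("H", "Y", "N", "Flu"),
    "p4": ("H", "N", "Y", "NonFlu"),
    "p5": ("N", "N", "N", "NonFlu"),
    "p6": ("N", "N", "N", "NonFlu"),
}
ATTRS = {"F": 0, "Cg": 1, "Fa": 2}
U = set(data)

def granule(x, attrs):
    """IND(attrs)-equivalence class of x."""
    key = tuple(data[x][ATTRS[a]] for a in attrs)
    return {y for y in U if tuple(data[y][ATTRS[a]] for a in attrs) == key}

def gamma(attrs):
    """Dependency degree of the decision D on the attribute set."""
    pos = {x for x in U if len({data[y][3] for y in granule(x, attrs)}) == 1}
    return len(pos) / len(U)

B = ["F", "Cg", "Fa"]
theta = {a: gamma(B) - gamma([b for b in B if b != a]) for a in B}
total = sum(theta.values())
w = {a: theta[a] / total for a in B}           # w(Cg) = 1, w(F) = w(Fa) = 0

X = {"p1", "p2", "p3"}                         # flu patients
alpha = 1.0
score = {x: sum(w[a] for a in B if granule(x, [a]) <= X) for x in U}
lower_w = {x for x in U if score[x] >= alpha}  # weighted lower approx: {p1, p2, p3}
```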
2.7 Neighborhood Rough Set
Neighborhood rough sets approximate a target set using neighborhood granules induced by a relation, flexibly accommodating non-equivalence and covering-based models [53–56].
Definition 2.7.1 (Neighborhood approximation space and neighborhood rough approximations). Let U be a nonempty universe. A neighborhood approximation space is a pair (U, N ),
where
N : U → P(U ),
x 7→ N (x),
assigns to each x ∈ U a (nonempty) neighborhood N (x) ⊆ U . Typical choices include:
(i) Relation-induced neighborhoods: for a binary relation S ⊆ U × U , NS (x) := { y ∈ U |
(x, y) ∈ S }.
(ii) Metric neighborhoods: for a (pseudo-)metric d on U and δ ≥ 0, Nδ (x) := { y ∈ U |
d(x, y) ≤ δ }.
For any X ⊆ U, the neighborhood lower and neighborhood upper approximations of X (with respect to N) are defined by

apr̲_N(X) := { x ∈ U | N(x) ⊆ X },    apr̄_N(X) := { x ∈ U | N(x) ∩ X ≠ ∅ }.

The neighborhood rough set of X is the pair ( apr̲_N(X), apr̄_N(X) ).
Example 2.7.2 (Neighborhood rough-set screening in predictive maintenance). Consider a factory with seven vibration sensors
U = {s1 , s2 , s3 , s4 , s5 , s6 , s7 }.
Each sensor si is described by a 2D feature vector
ϕ(si ) = (temperature deviation, vibration RMS) ∈ R2 ,


given by

| sᵢ | s1 | s2 | s3 | s4 | s5 | s6 | s7 |
|---|---|---|---|---|---|---|---|
| ϕ(sᵢ) | (0, 0) | (0.8, 0.2) | (1.6, 0.2) | (2.1, 0.9) | (5, 5) | (5.7, 5.2) | (6.5, 5.1) |
and let d(si , sj ) := kϕ(si ) − ϕ(sj )k2 be the Euclidean distance. Fix δ = 1 and define the metric
neighborhoods
Nδ (s) = {t ∈ U | d(s, t) ≤ δ}
(s ∈ U ).
A direct computation yields the neighborhood system
Nδ (s1 ) = {s1 , s2 },
Nδ (s4 ) = {s3 , s4 },
Nδ (s2 ) = {s1 , s2 , s3 },
Nδ (s3 ) = {s2 , s3 , s4 },
Nδ (s5 ) = {s5 , s6 },
Nδ (s6 ) = {s5 , s6 , s7 },
Nδ (s7 ) = {s6 , s7 }.
Suppose that, after a manual inspection, the set of confirmed faulty sensors is

X = {s1, s2, s4} ⊆ U.

Using neighborhood rough approximations,

apr̲_{Nδ}(X) = {s ∈ U | Nδ(s) ⊆ X},    apr̄_{Nδ}(X) = {s ∈ U | Nδ(s) ∩ X ≠ ∅},

we obtain

apr̲_{Nδ}(X) = {s1},    apr̄_{Nδ}(X) = {s1, s2, s3, s4}.
Hence the regions are
POSNδ (X) = {s1 },
BNDNδ (X) = {s2 , s3 , s4 },
NEGNδ (X) = {s5 , s6 , s7 }.
Interpretation: s1 is definitely within the faulty cluster (its whole neighborhood is faulty),
s2 , s3 , s4 are possibly faulty (their neighborhoods touch the faulty set), and s5 , s6 , s7 are definitely not near the faulty cluster under the tolerance δ = 1.
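A minimal sketch of this screening computation, assuming the feature table above; the names `phi`, `d`, `N` are our own:

```python
# Neighborhood rough approximations of Definition 2.7.1 on the sensor data
# of Example 2.7.2 (metric neighborhoods, delta = 1).
import math

phi = {"s1": (0, 0), "s2": (0.8, 0.2), "s3": (1.6, 0.2), "s4": (2.1, 0.9),
       "s5": (5, 5), "s6": (5.7, 5.2), "s7": (6.5, 5.1)}
U = set(phi)
delta = 1.0

def d(s, t):
    """Euclidean distance between the feature vectors of s and t."""
    return math.dist(phi[s], phi[t])

def N(s):
    """Metric neighborhood N_delta(s)."""
    return {t for t in U if d(s, t) <= delta}

X = {"s1", "s2", "s4"}               # confirmed faulty sensors
lower = {s for s in U if N(s) <= X}  # positive region: {s1}
upper = {s for s in U if N(s) & X}   # upper approximation: {s1, s2, s3, s4}
boundary = upper - lower             # {s2, s3, s4}
negative = U - upper                 # {s5, s6, s7}
```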
2.8 Sequential Rough Set
Sequential rough sets model evolving approximations by applying successive relations or time-indexed granules, updating lower/upper regions across stages of information [57, 58].
Definition 2.8.1 (Sequential rough approximations). Let U be a nonempty finite universe and
let

R = ⟨R1, R2, . . . , Rm⟩

be an ordered finite family of equivalence relations on U (each Ri ⊆ U × U). For x ∈ U, write
[x]Ri := { y ∈ U | (x, y) ∈ Ri }
for the Ri -equivalence class of x.
For each i ∈ {1, . . . , m} and each X ⊆ U, define the usual Pawlak lower/upper operators:

apr̲_{Ri}(X) := { x ∈ U | [x]Ri ⊆ X },    apr̄_{Ri}(X) := { x ∈ U | [x]Ri ∩ X ≠ ∅ }.


The sequential lower approximation of X along R is the iterated operator

apr̲_R^seq(X) := ( apr̲_{Rm} ∘ apr̲_{Rm−1} ∘ · · · ∘ apr̲_{R1} )(X),

and the sequential upper approximation is defined dually by

apr̄_R^seq(X) := U \ apr̲_R^seq(U \ X).

The sequential rough set of X (with respect to R) is the pair ( apr̲_R^seq(X), apr̄_R^seq(X) ).
Example 2.8.2 (Two-stage medical triage as a sequential rough set). Let U = {p1 , p2 , p3 , p4 , p5 , p6 }
be patients arriving at an emergency department.
Stage 1 (symptom-screen granulation). Define an equivalence relation R1 on U by the partition

U/R1 = { {p1, p2, p3}, {p4, p5, p6} },

where {p1, p2, p3} are "high-suspicion" by symptoms and {p4, p5, p6} are "lower-suspicion".
Stage 2 (rapid-test granulation). Define an equivalence relation R2 on U by the partition

U/R2 = { {p1, p2}, {p3, p4}, {p5, p6} },

where each block represents indistinguishability by a rapid-test pattern.
Let the target concept be the (later confirmed) infected set
X = {p1 , p2 , p3 } ⊆ U.
Consider the ordered family R = ⟨R1, R2⟩ and the sequential lower approximation

apr̲_R^seq(X) := ( apr̲_{R2} ∘ apr̲_{R1} )(X).

First,

apr̲_{R1}(X) = {p1, p2, p3},

since the R1-class {p1, p2, p3} is contained in X, while {p4, p5, p6} ⊄ X. Next,

apr̲_{R2}( apr̲_{R1}(X) ) = apr̲_{R2}({p1, p2, p3}) = {p1, p2},

because the R2-class {p1, p2} is fully contained in {p1, p2, p3}, but {p3, p4} is not.
Hence,

apr̲_R^seq(X) = {p1, p2}.
The sequential upper approximation is defined dually by

apr̄_R^seq(X) := U \ apr̲_R^seq(U \ X).

Since U \ X = {p4, p5, p6},

apr̲_{R1}(U \ X) = {p4, p5, p6},    apr̲_{R2}({p4, p5, p6}) = {p5, p6}.

Therefore,

apr̄_R^seq(X) = U \ {p5, p6} = {p1, p2, p3, p4}.
After two stages, {p1 , p2 } are definitely infected (sequential lower), {p5 , p6 } are definitely not
infected (sequential negative region), and {p3 , p4 } lie in the boundary where the second-stage
granule {p3 , p4 } prevents a definitive decision.
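The two-stage composition can be sketched as follows; the partition encoding and function names are our own:

```python
# Sequential rough set of Definition 2.8.1 on the two-stage triage data
# of Example 2.8.2.

U = {"p1", "p2", "p3", "p4", "p5", "p6"}
partitions = [
    [{"p1", "p2", "p3"}, {"p4", "p5", "p6"}],    # U/R1 (symptom screen)
    [{"p1", "p2"}, {"p3", "p4"}, {"p5", "p6"}],  # U/R2 (rapid test)
]

def lower(partition, X):
    """Pawlak lower approximation: union of blocks contained in X."""
    return {x for b in partition for x in b if b <= X}

def seq_lower(parts, X):
    """Compose the Pawlak lower operators in order R1, R2, ..., Rm."""
    for P in parts:
        X = lower(P, X)
    return X

def seq_upper(parts, X):
    """Dual operator: U minus the sequential lower approximation of U \\ X."""
    return U - seq_lower(parts, U - X)

X = {"p1", "p2", "p3"}  # (later confirmed) infected set
# seq_lower = {p1, p2}; seq_upper = {p1, p2, p3, p4}
```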


2.9 ContraRough Set
ContraRough sets incorporate contradiction degrees and thresholds to define lower/upper approximations, producing positive, boundary, negative regions under inconsistent evidence [59].
Definition 2.9.1 (ContraRough Set). [59] Let X ≠ ∅ be a universe and let U ⊆ X be a target
concept.
(i) Contradiction kernels. A relation-contradiction kernel is a map
cR : X × X → [0, 1]
satisfying
cR (x, x) = 0 (reflexivity),
cR (x, y) = cR (y, x) (symmetry).
A membership-contradiction kernel for U is a map
cU : X → [0, 1],
where cU (y) quantifies how contradictory it is to assert y ∈ U (smaller values mean more
consistent membership).
(ii) Thresholded consistency regions. Fix thresholds (α, β, γ) ∈ [0, 1]3 with β ≤ γ . Define
the admitted relation and its (kernel) neighborhood by
R(α) := {(x, y) ∈ X × X | cR (x, y) ≤ α},
Nα (x) := {y ∈ X | cR (x, y) ≤ α}.
Define the definite and possible slices of U by
Udef (β) := {y ∈ X | cU (y) ≤ β},
Upos (γ) := {y ∈ X | cU (y) ≤ γ}.
(iii) ContraRough approximations. The ContraRough lower and ContraRough upper approximations of U are

apr̲^CR_{(α,β)}(U) := {x ∈ X | Nα(x) ⊆ Udef(β)},    apr̄^CR_{(α,γ)}(U) := {x ∈ X | Nα(x) ∩ Upos(γ) ≠ ∅}.

The induced regions are defined as

POS_{(α,β)}(U) := apr̲^CR_{(α,β)}(U),
NEG_{(α,γ)}(U) := X \ apr̄^CR_{(α,γ)}(U),
BND_{(α,β,γ)}(U) := apr̄^CR_{(α,γ)}(U) \ apr̲^CR_{(α,β)}(U).


Example 2.9.2 (Contradictory incident reports classified by a ContraRough set). We illustrate
the ContraRough construction in a real-life fact-checking setting.
Universe and target concept. Let X be a set of short incident reports about the same
bridge-collapse event:
X = {r1 , r2 , r3 , r4 , r5 , r6 },
where r1 is an official emergency bulletin, r5 is a verified CCTV-based report, r2 , r3 are citizen
posts, and r4 , r6 are unverified viral posts. Let U ⊆ X denote the (unknown) concept “reliable
reports”.
(i) Contradiction kernels. Assume we have:
• a relation-contradiction kernel cR : X × X → [0, 1], where cR (x, y) measures how contradictory the factual content of reports x and y is (e.g., extracted-claim inconsistency; 0
means fully consistent);
• a membership-contradiction kernel cU : X → [0, 1], where cU (y) measures how contradictory it is to assert “y ∈ U ” (e.g., based on source reputation + cross-check signals; smaller
is more consistent).
We set
cU (r1 ) = 0.05,
cU (r2 ) = 0.25,
cU (r3 ) = 0.35,
cU (r4 ) = 0.55,
cU (r5 ) = 0.08,
cU (r6 ) = 0.75.
(ii) Thresholds and admitted neighborhoods. Choose thresholds (α, β, γ) = (0.20, 0.10, 0.40)
(with β ≤ γ ). Thus
Udef (β) = {y ∈ X | cU (y) ≤ 0.10} = {r1 , r5 },
Upos (γ) = {y ∈ X | cU (y) ≤ 0.40} = {r1 , r2 , r3 , r5 }.
Assume the admitted relation R(α) = {(x, y) | cR (x, y) ≤ 0.20} yields exactly the following
low-contradiction pairs (besides reflexivity):
cR (r1 , r5 ) = 0.15,
cR (r2 , r3 ) = 0.10,
cR (r4 , r6 ) = 0.05,
and all other distinct pairs have cR > 0.20. Hence the admitted neighborhoods Nα(x) = {y ∈ X | cR(x, y) ≤ α} are
Nα (r1 ) = {r1 , r5 },
Nα (r5 ) = {r5 , r1 },
Nα (r2 ) = {r2 , r3 },
Nα (r3 ) = {r3 , r2 },
Nα (r4 ) = {r4 , r6 },
Nα (r6 ) = {r6 , r4 }.
(iii) ContraRough approximations and regions. The ContraRough lower approximation collects reports whose entire admitted neighborhood lies inside the definite slice Udef(β):

apr̲^CR_{(α,β)}(U) = {x ∈ X | Nα(x) ⊆ Udef(β)} = {r1, r5}.


The ContraRough upper approximation collects reports whose admitted neighborhood intersects the possible slice Upos(γ):

apr̄^CR_{(α,γ)}(U) = {x ∈ X | Nα(x) ∩ Upos(γ) ≠ ∅} = {r1, r2, r3, r5}.
Therefore the induced regions are
POS(α,β) (U ) = {r1 , r5 },
BND(α,β,γ) (U ) = {r2 , r3 },
NEG(α,γ) (U ) = {r4 , r6 }.
r1 and r5 are definitely reliable because everything they are consistent with (at low contradiction) is also definitely reliable; r2, r3 remain borderline (possibly reliable) due to weaker credibility signals; and r4, r6 fall into the negative region because their admitted neighborhoods do not touch any possibly reliable report.
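A minimal sketch of this fact-checking example; kernel values not listed in the text are assumed maximally contradictory (1.0), which reproduces the stated neighborhoods, and all names are our own:

```python
# ContraRough regions of Definition 2.9.1 on the incident-report data of
# Example 2.9.2, with thresholds (alpha, beta, gamma) = (0.20, 0.10, 0.40).

X = {"r1", "r2", "r3", "r4", "r5", "r6"}
cU = {"r1": 0.05, "r2": 0.25, "r3": 0.35, "r4": 0.55, "r5": 0.08, "r6": 0.75}
pair_cR = {frozenset({"r1", "r5"}): 0.15,   # listed low-contradiction pairs
           frozenset({"r2", "r3"}): 0.10,
           frozenset({"r4", "r6"}): 0.05}

def cR(x, y):
    if x == y:
        return 0.0                          # reflexivity: cR(x, x) = 0
    return pair_cR.get(frozenset({x, y}), 1.0)  # assumed: unlisted pairs fully contradictory

alpha, beta, gamma = 0.20, 0.10, 0.40

def N(x):
    """Admitted neighborhood N_alpha(x)."""
    return {y for y in X if cR(x, y) <= alpha}

U_def = {y for y in X if cU[y] <= beta}     # definite slice: {r1, r5}
U_pos = {y for y in X if cU[y] <= gamma}    # possible slice: {r1, r2, r3, r5}

POS = {x for x in X if N(x) <= U_def}       # {r1, r5}
upper = {x for x in X if N(x) & U_pos}      # {r1, r2, r3, r5}
BND, NEG = upper - POS, X - upper           # {r2, r3}, {r4, r6}
```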
2.10 Probabilistic Rough Set
Probabilistic rough sets define approximations via conditional probability thresholds, permitting
bounded misclassification and yielding positive, boundary, negative regions for classification
[60–62]. Probabilistic rough sets have attracted a substantial volume of research in recent years
as well [63–65]. Related concepts include fuzzy probabilistic rough sets [66–68], probabilistic
variable precision rough sets [69, 70], and neutrosophic probabilistic rough sets [71, 72].
Definition 2.10.1 ((α, β )-probabilistic rough approximations / probabilistic rough set). [60–62]
Let U be a nonempty finite universe and let E ⊆ U × U be an equivalence relation. For x ∈ U ,
write the granule (equivalence class)
[x]E := { y ∈ U | (x, y) ∈ E }.
For any A ⊆ U, define the rough-membership (conditional probability) function

µA(x) := Pr(A | [x]E) ≈ |A ∩ [x]E| / |[x]E|

(where the ratio gives the usual empirical estimate under a uniform assumption on [x]E).
Fix thresholds α, β with 0 ≤ β < α ≤ 1. The (α, β)-probabilistic lower and upper approximations of A are

apr̲_{(α,β)}(A) := { x ∈ U | Pr(A | [x]E) ≥ α },    apr̄_{(α,β)}(A) := { x ∈ U | Pr(A | [x]E) > β }.

Equivalently, the induced three disjoint regions are

POS_{(α,β)}(A) := { x ∈ U | Pr(A | [x]E) ≥ α },
BND_{(α,β)}(A) := { x ∈ U | β < Pr(A | [x]E) < α },
NEG_{(α,β)}(A) := { x ∈ U | Pr(A | [x]E) ≤ β }.


The (α, β)-probabilistic rough set of A is represented by the approximation pair

( apr̲_{(α,β)}(A), apr̄_{(α,β)}(A) )

(or equivalently by the triple (POS_{(α,β)}(A), BND_{(α,β)}(A), NEG_{(α,β)}(A))).
If α = 1 and β = 0, then apr̲_{(1,0)}(A) = {x | Pr(A | [x]E) = 1} and apr̄_{(1,0)}(A) = {x | Pr(A | [x]E) > 0}, recovering the classical Pawlak lower/upper approximations.
Example 2.10.2 (Credit approval as a probabilistic rough set). Let U be a finite set of loan
applicants,
U = {a1 , a2 , . . . , a12 }.
Suppose applicants are described only by a coarse attribute tuple
(Income band, Credit-history flag) ∈ {H, M, L} × {Good, Bad},
and let E ⊆ U × U be the induced indiscernibility (equivalence) relation: x E y iff x and y share
the same tuple. Assume the resulting E -classes are
G1 = {a1 , a2 , a3 , a4 },
G2 = {a5 , a6 , a7 , a8 , a9 },
G3 = {a10 , a11 , a12 },
so that [x]E ∈ {G1 , G2 , G3 } for all x ∈ U .
Let the target concept A ⊆ U be the set of applicants who (based on historical outcomes) repay
on time:
A = {a1 , a2 , a3 , a4 , a5 , a6 , a7 }.
Following the probabilistic rough-set model, we estimate the conditional probability

Pr(A | [x]E) ≈ |A ∩ [x]E| / |[x]E|    (x ∈ U),

and choose thresholds 0 ≤ β < α ≤ 1 to form the three regions POS_{(α,β)}(A), BND_{(α,β)}(A), NEG_{(α,β)}(A).
Take (α, β) = (0.8, 0.3). Then, at the class level,

Pr(A | G1) = |A ∩ G1| / |G1| = 4/4 = 1,    Pr(A | G2) = 3/5 = 0.6,    Pr(A | G3) = 0/3 = 0.

Hence,

POS_{(0.8,0.3)}(A) = G1 = {a1, a2, a3, a4},
BND_{(0.8,0.3)}(A) = G2 = {a5, a6, a7, a8, a9},
NEG_{(0.8,0.3)}(A) = G3 = {a10, a11, a12},

since 1 ≥ 0.8, 0.3 < 0.6 < 0.8, and 0 ≤ 0.3, respectively.
Therefore, the (0.8, 0.3)-probabilistic rough set of A is the approximation pair

( apr̲_{(0.8,0.3)}(A), apr̄_{(0.8,0.3)}(A) ) = ( POS_{(0.8,0.3)}(A), POS_{(0.8,0.3)}(A) ∪ BND_{(0.8,0.3)}(A) ) = ( G1, G1 ∪ G2 ).
Applicants in POS(0.8,0.3) (A) can be automatically approved (high-confidence repayment), those
in NEG(0.8,0.3) (A) can be automatically rejected (high-confidence non-repayment), and those in
BND(0.8,0.3) (A) are sent to manual review (uncertain region).
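A minimal sketch of this credit example; exact fractions avoid floating-point artifacts at the thresholds, and the variable names are our own:

```python
# (alpha, beta)-probabilistic rough regions of Definition 2.10.1 on the
# credit data of Example 2.10.2.
from fractions import Fraction

classes = [{"a1", "a2", "a3", "a4"},             # E-class G1
           {"a5", "a6", "a7", "a8", "a9"},       # E-class G2
           {"a10", "a11", "a12"}]                # E-class G3
A = {f"a{i}" for i in range(1, 8)}               # on-time repayers a1..a7
alpha, beta = Fraction(8, 10), Fraction(3, 10)   # (0.8, 0.3) as exact fractions

POS, BND, NEG = set(), set(), set()
for G in classes:
    p = Fraction(len(A & G), len(G))             # Pr(A | G)
    if p >= alpha:                               # automatic approval
        POS |= G
    elif p > beta:                               # manual review
        BND |= G
    else:                                        # automatic rejection
        NEG |= G
# POS = G1, BND = G2, NEG = G3
```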


2.11 IndetermRough Set
An IndetermRough set introduces an explicit indeterminate region between the lower and upper approximations, capturing uncertainty that goes beyond the boundary region induced by indiscernibility relations on objects [73]. Related concepts include IndetermSoft sets (Indeterminacy Soft Sets) and related indeterminacy-based soft-set models [74–76].
Definition 2.11.1 (IndetermRough Set). [73] Let U ≠ ∅ be a universe. An indeterminate
(equivalence-like) relation on U is specified by a pair of binary relations
Rdef ⊆ Rpos ⊆ U × U,
where (x, y) ∈ Rdef means “x is definitely indiscernible from y ”, and (x, y) ∈ Rpos means “x is
possibly indiscernible from y ” (i.e., definite-or-indeterminate membership). Assume Rdef is an
equivalence relation (so definite indiscernibility is consistent).
An indeterminate subset of U is represented by a pair
X ∗ = (Xdef , Xpos ) with Xdef ⊆ Xpos ⊆ U,
where Xdef is the set of elements definitely in the concept and Xpos is the set of elements possibly
in the concept.
For each x ∈ U , define the definite and possible neighborhoods
Ndef (x) := {y ∈ U | (x, y) ∈ Rdef },
Npos (x) := {y ∈ U | (x, y) ∈ Rpos }.
The indeterminate lower and indeterminate upper approximations of X∗ are

X̲∗ := { x ∈ U | Ndef(x) ⊆ Xdef },    X̄∗ := { x ∈ U | Npos(x) ∩ Xpos ≠ ∅ }.

The pair (X̲∗, X̄∗) is called the IndetermRough approximation of X∗ (under (Rdef, Rpos)), and the triple (X∗, X̲∗, X̄∗) is referred to as an IndetermRough Set.
Example 2.11.2 (Emergency-room influenza triage with an explicit indeterminate region). Let
U be a set of six patients arriving at an emergency room:
U = {p1 , p2 , p3 , p4 , p5 , p6 }.
Each patient has (i) a rapid PCR outcome in {Pos, Neg, Unk} and (ii) a fever-status in {High, Low}.
Assume:
| patient | p1 | p2 | p3 | p4 | p5 | p6 |
|---|---|---|---|---|---|---|
| PCR | Pos | Pos | Neg | Neg | Unk | Unk |
| Fever | High | High | Low | Low | High | Low |
Definite vs. possible indiscernibility. Define Rdef by

(pi, pj) ∈ Rdef ⇐⇒ ( PCR(pi) = PCR(pj) ) ∧ ( Fever(pi) = Fever(pj) ).
Then Rdef is an equivalence relation whose classes are
[p1 ]Rdef = {p1 , p2 },
[p3 ]Rdef = {p3 , p4 },
[p5 ]Rdef = {p5 },
[p6 ]Rdef = {p6 }.


Define the coarser (possible) relation Rpos by

(pi, pj) ∈ Rpos ⇐⇒ Fever(pi) = Fever(pj),

so that

[p1]Rpos = {p1, p2, p5},    [p3]Rpos = {p3, p4, p6}.

Clearly Rdef ⊆ Rpos (definite agreement implies possible agreement).
Indeterminate target concept. Let X ∗ = (Xdef , Xpos ) represent the concept “patient has
influenza,” where
Xdef = {p1 , p2 } (PCR positive),
Xpos = {p1 , p2 , p5 } (PCR positive or PCR unknown with high fever).
Hence Xdef ⊆ Xpos ⊆ U .
Neighborhoods and IndetermRough approximations. Using Ndef(x) = [x]Rdef and Npos(x) = [x]Rpos, the IndetermRough lower and upper approximations are

X̲∗ = {x ∈ U | Ndef(x) ⊆ Xdef},    X̄∗ = {x ∈ U | Npos(x) ∩ Xpos ≠ ∅}.

Compute:

X̲∗ = {p1, p2},    X̄∗ = {p1, p2, p5}.

Therefore, the explicit indeterminate region is

X̄∗ \ X̲∗ = {p5},

and the "definitely not" region is U \ X̄∗ = {p3, p4, p6}.
Interpretation (real-life decision). Patients in X̲∗ are treated as confirmed influenza (start antivirals immediately), patients in U \ X̄∗ as non-influenza, and patients in X̄∗ \ X̲∗ (here, p5) are indeterminate: isolate and order confirmatory testing because the evidence is neither definite nor dismissible.
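A minimal sketch of this triage example, assuming the table above; the function and variable names are our own:

```python
# IndetermRough approximation of Definition 2.11.1 on the emergency-room
# data of Example 2.11.2.

pcr = {"p1": "Pos", "p2": "Pos", "p3": "Neg", "p4": "Neg", "p5": "Unk", "p6": "Unk"}
fever = {"p1": "High", "p2": "High", "p3": "Low", "p4": "Low", "p5": "High", "p6": "Low"}
U = set(pcr)

def N_def(x):
    """Definite neighborhood: same PCR outcome and same fever status."""
    return {y for y in U if pcr[y] == pcr[x] and fever[y] == fever[x]}

def N_pos(x):
    """Possible neighborhood: same fever status only (coarser relation)."""
    return {y for y in U if fever[y] == fever[x]}

X_def = {"p1", "p2"}            # definitely influenza (PCR positive)
X_pos = {"p1", "p2", "p5"}      # possibly influenza (positive, or unknown + high fever)

lower = {x for x in U if N_def(x) <= X_def}   # {p1, p2}
upper = {x for x in U if N_pos(x) & X_pos}    # {p1, p2, p5}
indeterminate = upper - lower                 # {p5}
```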
2.12 HesiRough Set
HesiRough set extends rough sets by modeling hesitant membership; lower/upper approximations
aggregate multiple possible degrees for each object under uncertainty. Related concepts include
hesitant fuzzy sets [4, 77] and hesitant neutrosophic sets [78–80], among others.
Definition 2.12.1 (HesiRough set (hesitation-based rough set) — refined). Let X ≠ ∅ be a
universe. We model hesitancy by set-valued (hesitant) predicates
hR : X × X −→ P({0, 1}) \ {∅},
hU : X −→ P({0, 1}) \ {∅}.
For (x, y) ∈ X × X , the value hR (x, y) is interpreted as the set of plausible truth values for the
statement “x is R-related to y ”, and hU (y) for “y ∈ U ”:
hR(x, y) = {1} (definitely related),    hR(x, y) = {0} (definitely not related),    hR(x, y) = {0, 1} (hesitant/undetermined),


and similarly for hU (y).
(H0) Definite reflexivity (nondegeneracy). Assume
hR (x, x) = {1}
(∀x ∈ X).
This guarantees that every object is definitely related to itself, preventing vacuous membership
in the lower approximation from empty definite neighborhoods.
Define the definite and possible parts:
Rdef := {(x, y) ∈ X × X | hR (x, y) = {1}},
Rpos := {(x, y) ∈ X × X | 1 ∈ hR (x, y)},
Udef := {y ∈ X | hU (y) = {1}},
Upos := {y ∈ X | 1 ∈ hU (y)}.
For each x ∈ X , define the definite/possible neighborhoods
Ndef (x) := {y ∈ X | (x, y) ∈ Rdef },
Npos (x) := {y ∈ X | (x, y) ∈ Rpos }.
The triple (X, hR, hU) is called a HesiRough set (or HesiRough approximation space) when the lower and upper approximations of the hesitant target are defined by

U̲H := {x ∈ X | Ndef(x) ⊆ Udef},    ŪH := {x ∈ X | Npos(x) ∩ Upos ≠ ∅}.

The induced regions are

POSH(U) := U̲H,    BNDH(U) := ŪH \ U̲H,    NEGH(U) := X \ ŪH.
Example 2.12.2 (HesiRough set for e-commerce fraud screening under hesitant evidence). Let
X = {t1 , t2 , t3 , t4 , t5 } be five online transactions. We model a hesitant “fraud” target using
set-valued predicates hR (transaction linkage) and hU (fraud label), as in Definition 2.12.1.
Hesitant relation predicate hR . Assume the nondegeneracy condition (H0):
hR (x, x) = {1}
(∀x ∈ X).
For distinct transactions, define:
hR (t1 , t2 ) = hR (t2 , t1 ) = {1} (same stolen card and same device ID; definite link),
hR (t4 , t5 ) = hR (t5 , t4 ) = {1} (same legitimate subscriber account; definite link),
hR (t1 , t3 ) = hR (t3 , t1 ) = {0, 1},
hR (t2 , t3 ) = hR (t3 , t2 ) = {0, 1}
(shared IP/shipping region; link plausible but unconfirmed),
and for all remaining unordered pairs {x, y} not listed above, set
hR (x, y) = {0} (definitely not linked).
This yields the definite/possible relations

Rdef = {(x, y) | hR(x, y) = {1}},    Rpos = {(x, y) | 1 ∈ hR(x, y)},

so Rdef ⊆ Rpos.


Hesitant target predicate hU. Let

hU(t1) = {1}, hU(t2) = {1} (chargeback confirmed),
hU(t3) = {0, 1} (manual review pending; fraud uncertain),
hU(t4) = {0}, hU(t5) = {0} (delivered and verified; non-fraud).

Then

Udef = {y ∈ X | hU(y) = {1}} = {t1, t2},    Upos = {y ∈ X | 1 ∈ hU(y)} = {t1, t2, t3}.
Definite/possible neighborhoods. From the above,

Ndef(t1) = Ndef(t2) = {t1, t2},    Ndef(t3) = {t3},    Ndef(t4) = Ndef(t5) = {t4, t5},
Npos(t1) = Npos(t2) = Npos(t3) = {t1, t2, t3},    Npos(t4) = Npos(t5) = {t4, t5}.
HesiRough approximations and regions. The HesiRough lower/upper approximations are

U̲H = {x ∈ X | Ndef(x) ⊆ Udef} = {t1, t2},    ŪH = {x ∈ X | Npos(x) ∩ Upos ≠ ∅} = {t1, t2, t3}.

Hence the induced regions are

POSH(U) = {t1, t2},    BNDH(U) = {t3},    NEGH(U) = {t4, t5}.
Interpretation: t1 , t2 are definitely fraudulent (their definite neighborhoods stay within confirmed
fraud), t3 is boundary (only possible linkage/label evidence), and t4 , t5 are negative since even
their possible neighborhoods do not intersect the possibly-fraudulent set.
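A minimal sketch of this screening example; the encoding of truth-value sets as Python `frozenset`s and all names are our own:

```python
# HesiRough regions of Definition 2.12.1 on the fraud-screening data of
# Example 2.12.2.

X = {"t1", "t2", "t3", "t4", "t5"}
ONE, ZERO, HES = frozenset({1}), frozenset({0}), frozenset({0, 1})

pair_hR = {frozenset({"t1", "t2"}): ONE, frozenset({"t4", "t5"}): ONE,
           frozenset({"t1", "t3"}): HES, frozenset({"t2", "t3"}): HES}

def hR(x, y):
    if x == y:
        return ONE                         # (H0) definite reflexivity
    return pair_hR.get(frozenset({x, y}), ZERO)  # unlisted pairs: definitely not linked

hU = {"t1": ONE, "t2": ONE, "t3": HES, "t4": ZERO, "t5": ZERO}

U_def = {y for y in X if hU[y] == ONE}     # {t1, t2}
U_pos = {y for y in X if 1 in hU[y]}       # {t1, t2, t3}

def N_def(x):
    return {y for y in X if hR(x, y) == ONE}

def N_pos(x):
    return {y for y in X if 1 in hR(x, y)}

POS = {x for x in X if N_def(x) <= U_def}  # {t1, t2}
upper = {x for x in X if N_pos(x) & U_pos} # {t1, t2, t3}
BND, NEG = upper - POS, X - upper          # {t3}, {t4, t5}
```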
Theorem 2.12.3 (Well-definedness and basic inclusions). Under the assumptions of Definition 2.12.1, the sets U̲H and ŪH are well-defined subsets of X. Moreover,

U̲H ⊆ ŪH ⊆ X,

and the regions POSH(U), BNDH(U), and NEGH(U) are well-defined and form a disjoint cover of X:

X = POSH(U) ∪̇ BNDH(U) ∪̇ NEGH(U).
Proof. (Well-definedness). Since hR and hU are ordinary functions with codomain P({0, 1}) \ {∅}, the conditions hR(x, y) = {1} and 1 ∈ hR(x, y) define subsets Rdef, Rpos ⊆ X × X, and hU(y) = {1} and 1 ∈ hU(y) define subsets Udef, Upos ⊆ X. Hence for each x ∈ X, the neighborhoods Ndef(x), Npos(x) ⊆ X are well-defined. Therefore the defining predicates

x ∈ U̲H ⇐⇒ Ndef(x) ⊆ Udef,    x ∈ ŪH ⇐⇒ Npos(x) ∩ Upos ≠ ∅

are meaningful, so U̲H, ŪH ⊆ X are well-defined.
(Inclusion U̲H ⊆ ŪH). First note that Rdef ⊆ Rpos and Udef ⊆ Upos, hence Ndef(x) ⊆ Npos(x) for all x ∈ X. Let x ∈ U̲H. Then Ndef(x) ⊆ Udef. By (H0), (x, x) ∈ Rdef, so x ∈ Ndef(x), hence


x ∈ Udef ⊆ Upos. Also (x, x) ∈ Rdef ⊆ Rpos implies x ∈ Npos(x). Therefore x ∈ Npos(x) ∩ Upos, so Npos(x) ∩ Upos ≠ ∅ and thus x ∈ ŪH.
(Region decomposition). By definition,

POSH(U) = U̲H,    BNDH(U) = ŪH \ U̲H,    NEGH(U) = X \ ŪH.

These are well-defined. Using U̲H ⊆ ŪH, the three sets are pairwise disjoint. Finally,

U̲H ∪ (ŪH \ U̲H) ∪ (X \ ŪH) = X,

so they form a disjoint cover of X.
2.13 GraphicRough Set
Graphic rough sets define lower/upper approximations of vertex subsets using graph neighborhoods or reachability, capturing uncertainty in network classification tasks [81].
Definition 2.13.1 (GraphicRough Set induced by an attribute graph). [81] Let U be a
nonempty finite universe of objects and let V be a finite set of attributes. Let G = (V, E)
be an (undirected) graph on V describing interrelationships among attributes. Assume that to
each attribute v ∈ V we associate an equivalence relation Rv ⊆ U × U (indiscernibility with
respect to v ).
Let Sub(G) denote the family of all subgraphs H = (V_H, E_H) of G (with V_H ⊆ V and E_H ⊆ E, every edge of E_H having both endpoints in V_H). For each H ∈ Sub(G) define the combined equivalence relation

R_H := ⋂_{v ∈ V_H} R_v.
For any target set X ⊆ U, define the H-lower and H-upper approximations by

X̲_H := { x ∈ U | [x]_{R_H} ⊆ X },   X̄_H := { x ∈ U | [x]_{R_H} ∩ X ≠ ∅ },

where [x]_{R_H} := { y ∈ U | (x, y) ∈ R_H }. The GraphicRough Set of X (induced by G) is the mapping

F_X : Sub(G) −→ P(U) × P(U),   F_X(H) := (X̲_H, X̄_H).
Example 2.13.2 (Credit screening as a GraphicRough Set). Consider a small loan–application
scenario.
Objects. Let
U = {p1 , p2 , p3 , p4 , p5 , p6 }
be six loan applicants.
Attributes and their dependency graph. Let the attribute set be
V = {Inc, Debt, Cred},


# Page. 34

![Page Image](https://bcdn.docswell.com/page/V7PK4PGQJ8.jpg)

where Inc = income band, Debt = debt level, and Cred = credit history band. Assume the attribute graph G = (V, E) is the path

E = { {Inc, Debt}, {Debt, Cred} },

encoding that Debt interacts with both income and credit in the screening process.
Attribute values (coarse categories).

| Applicant | Inc | Debt | Cred |
| --- | --- | --- | --- |
| p1 | H | L | G |
| p2 | H | L | F |
| p3 | H | H | P |
| p4 | L | H | P |
| p5 | L | L | F |
| p6 | L | L | G |
Equivalence relations per attribute. For each v ∈ V, define R_v ⊆ U × U by

(p_i, p_j) ∈ R_v ⇐⇒ p_i and p_j have the same value on attribute v.

For instance, R_Debt has two classes, {p1, p2, p5, p6} (low debt) and {p3, p4} (high debt).
Target concept. Let
X = {p1 , p2 , p6 } ⊆ U
be the set of applicants the bank intends to approve (based on additional external checks).
GraphicRough approximations under subgraphs. Let H be the subgraph induced by {Inc, Debt} (so R_H = R_Inc ∩ R_Debt). Then the R_H-classes are determined by the pair (Inc, Debt):

{p1, p2}, {p3}, {p4}, {p5, p6}.

Hence,

X̲_H = { p ∈ U | [p]_{R_H} ⊆ X } = {p1, p2},

because [p1]_{R_H} = [p2]_{R_H} = {p1, p2} ⊆ X, while [p6]_{R_H} = {p5, p6} ⊈ X. Moreover,

X̄_H = { p ∈ U | [p]_{R_H} ∩ X ≠ ∅ } = {p1, p2, p5, p6},

since the class {p5, p6} intersects X via p6.
Now take the full-attribute subgraph H′ := G with V_{H′} = {Inc, Debt, Cred}. Then R_{H′} = R_Inc ∩ R_Debt ∩ R_Cred yields singleton classes in this toy data, so the approximation becomes crisp:

X̲_{H′} = X = X̄_{H′}.

Interpretation. The GraphicRough Set map

F_X : Sub(G) → P(U) × P(U),   F_X(H) = (X̲_H, X̄_H),

encodes how the “definitely-approve” and “possibly-approve” applicants change as we move across different attribute subgraphs (i.e., different dependency-aware combinations of attributes).
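The subgraph-indexed approximations above can be checked mechanically. The following Python sketch is our own illustration (not code from [81]); the helper name `approximations` is hypothetical. It recomputes the lower/upper pair for the credit-screening data:

```python
# Toy data from Example 2.13.2.
values = {
    "p1": {"Inc": "H", "Debt": "L", "Cred": "G"},
    "p2": {"Inc": "H", "Debt": "L", "Cred": "F"},
    "p3": {"Inc": "H", "Debt": "H", "Cred": "P"},
    "p4": {"Inc": "L", "Debt": "H", "Cred": "P"},
    "p5": {"Inc": "L", "Debt": "L", "Cred": "F"},
    "p6": {"Inc": "L", "Debt": "L", "Cred": "G"},
}
X = {"p1", "p2", "p6"}  # applicants the bank intends to approve

def approximations(attrs, target):
    """Lower/upper approximations under R_H = intersection of R_v for v in attrs."""
    U = set(values)
    def granule(x):  # [x]_{R_H}: objects agreeing with x on every attribute in attrs
        return {y for y in U
                if all(values[y][a] == values[x][a] for a in attrs)}
    lower = {x for x in U if granule(x) <= target}
    upper = {x for x in U if granule(x) & target}
    return lower, upper

lo, up = approximations(["Inc", "Debt"], X)  # subgraph H induced by {Inc, Debt}
```

Running the full-attribute case `approximations(["Inc", "Debt", "Cred"], X)` returns the pair (X, X), confirming the crisp case described above.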


# Page. 35

![Page Image](https://bcdn.docswell.com/page/2JVVX26PJQ.jpg)

2.14 ClusterRough Set
A cluster rough set uses clustering-induced granules in place of single-attribute equivalence classes; the lower/upper approximations aggregate, cluster by cluster, the granules fully inside or intersecting the target subset [81].
Definition 2.14.1 (ClusterRough Set induced by a clustering of attributes). [81] Let U be a
nonempty finite universe, V a finite attribute set, and {Rv }v∈V equivalence relations on U as
above. Let C = {C1 , . . . , Ck } be a partition (clustering) of V into nonempty clusters.
For each cluster C_j ∈ C define the combined equivalence relation

R_{C_j} := ⋂_{v ∈ C_j} R_v.

Given X ⊆ U, define the cluster-wise lower/upper approximations by

X̲_{C_j} := { x ∈ U | [x]_{R_{C_j}} ⊆ X },   X̄_{C_j} := { x ∈ U | [x]_{R_{C_j}} ∩ X ≠ ∅ }.

The ClusterRough Set of X (with respect to C) is the mapping

G_X : C −→ P(U) × P(U),   G_X(C_j) := (X̲_{C_j}, X̄_{C_j}).
Example 2.14.2 (Credit-risk assessment via clustered attributes). ClusterRough sets replace single-attribute granules by clusters of attributes, using R_{C_j} = ⋂_{v ∈ C_j} R_v and cluster-wise Pawlak lower/upper approximations (X̲_{C_j}, X̄_{C_j}) for each cluster C_j (see the formal definition in [81]).
Let U = {a1 , a2 , a3 , a4 , a5 , a6 } be six loan applicants. Consider four discretized attributes
V = {Inc, Debt, Late, Score},
where Inc ∈ {H, L} (high/low income), Debt ∈ {H, L} (high/low debt ratio), Late ∈ {Y, N } (recent late payments yes/no), and Score ∈ {G, P } (good/poor credit score band). Each attribute
v ∈ V induces an equivalence relation

x R_v y ⇐⇒ f_v(x) = f_v(y),

where f_v : U → Dom(v) records the (discretized) value of applicant x on attribute v.
Assume the applicants have the following profiles:

| Applicant | Inc | Debt | Late | Score |
| --- | --- | --- | --- | --- |
| a1 | H | L | N | G |
| a2 | H | L | N | G |
| a3 | H | H | Y | P |
| a4 | L | H | Y | P |
| a5 | L | L | N | G |
| a6 | L | L | Y | P |
We cluster the attributes into two groups:
C = {C1 , C2 },
C1 = {Inc, Debt} (financial cluster),
C2 = {Late, Score} (credit-history cluster).


# Page. 36

![Page Image](https://bcdn.docswell.com/page/5EGLVRNQJL.jpg)

For each cluster C_j ∈ C define R_{C_j} = ⋂_{v ∈ C_j} R_v. Let X ⊆ U be the target set of high default-risk applicants:

X = {a3, a4, a6}.
(1) Financial cluster C1 = {Inc, Debt}. The R_{C1}-equivalence classes (same income bucket and debt bucket) are

[a1]_{R_{C1}} = {a1, a2},   [a3]_{R_{C1}} = {a3},   [a4]_{R_{C1}} = {a4},   [a5]_{R_{C1}} = {a5, a6}.

Hence the cluster-wise approximations are

X̲_{C1} = { x ∈ U | [x]_{R_{C1}} ⊆ X } = {a3, a4},
X̄_{C1} = { x ∈ U | [x]_{R_{C1}} ∩ X ≠ ∅ } = {a3, a4, a5, a6}.

Interpretation: using only financial attributes, a6 is not certainly high-risk because it shares the same financial profile with a5 ∉ X (a boundary effect).
(2) Credit-history cluster C2 = {Late, Score}. The R_{C2}-equivalence classes (same late-payment and score band) are

[a1]_{R_{C2}} = {a1, a2, a5} (Late = N, Score = G),   [a3]_{R_{C2}} = {a3, a4, a6} (Late = Y, Score = P).

Therefore

X̲_{C2} = {a3, a4, a6} = X,   X̄_{C2} = {a3, a4, a6} = X.

Interpretation: the credit-history cluster perfectly isolates the high-risk group in this toy dataset. Finally, the ClusterRough representation of X is the mapping

G_X : C → P(U) × P(U),   G_X(C1) = ({a3, a4}, {a3, a4, a5, a6}),   G_X(C2) = (X, X).
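The two cluster-wise pairs can be recomputed directly. This is a minimal sketch with the example's toy profiles; the function name `cluster_rough` is our own, not from [81]:

```python
# Applicant profiles from Example 2.14.2: (Inc, Debt, Late, Score).
profiles = {
    "a1": ("H", "L", "N", "G"), "a2": ("H", "L", "N", "G"),
    "a3": ("H", "H", "Y", "P"), "a4": ("L", "H", "Y", "P"),
    "a5": ("L", "L", "N", "G"), "a6": ("L", "L", "Y", "P"),
}
ATTRS = ("Inc", "Debt", "Late", "Score")
X = {"a3", "a4", "a6"}  # confirmed high default-risk applicants

def cluster_rough(cluster, target):
    """(lower, upper) under R_C = intersection of R_v over v in the cluster."""
    idx = [ATTRS.index(v) for v in cluster]
    U = set(profiles)
    def granule(x):  # objects sharing x's values on every attribute of the cluster
        return {y for y in U
                if all(profiles[y][i] == profiles[x][i] for i in idx)}
    return ({x for x in U if granule(x) <= target},
            {x for x in U if granule(x) & target})

G_X = {"C1": cluster_rough(("Inc", "Debt"), X),    # financial cluster
       "C2": cluster_rough(("Late", "Score"), X)}  # credit-history cluster
```

As in the text, the financial cluster leaves a boundary ({a5, a6} mixes risk levels), while the credit-history cluster yields a crisp description.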
2.15 Multipolar Rough Set
A multipolar rough set applies Pawlak approximations to several overlapping poles, yielding an
m-tuple of lower–upper pairs under one relation. Related concepts with a similar structural flavor
include multipolar fuzzy sets [82], multipolar neutrosophic sets [83–85], and multipolar plithogenic
sets [86].
Definition 2.15.1 (Multipolar Rough Set). Let U ≠ ∅ be a universe and let R ⊆ U × U be an equivalence relation. Fix an integer m ≥ 2 and let X_1, . . . , X_m ⊆ U be m (not necessarily disjoint) subsets, interpreted as m distinct evaluative “poles”. For each i ∈ {1, . . . , m}, define the Pawlak approximations

X̲_i := R̲(X_i),   X̄_i := R̄(X_i).

The multipolar rough set determined by (X_1, . . . , X_m) (under R) is the m-tuple

MRS(X_1, . . . , X_m) := ((X̲_1, X̄_1), (X̲_2, X̄_2), . . . , (X̲_m, X̄_m)).


# Page. 37

![Page Image](https://bcdn.docswell.com/page/4JQY6VQW7P.jpg)

Example 2.15.2 (Multipolar rough set in clinical triage (differential diagnosis)). Consider an
emergency clinic that performs a fast initial screening for each arriving patient. Let
U = {p1 , p2 , . . . , pn }
be the set of patients seen in one day, and let the screening record be the binary feature vector

ϕ(p) := (Fever(p), Cough(p), SpO2Low(p), Travel(p)) ∈ {0, 1}^4.

Define an indiscernibility relation (equivalence relation) on U by

(p, q) ∈ R ⇐⇒ ϕ(p) = ϕ(q),
so that [p]R collects all patients with the same observable screening pattern.
In practice, clinicians may attach multiple simultaneous tentative labels (a differential diagnosis),
so we consider several possibly overlapping “poles”:
X1 := {patients suspected of influenza},
X2 := {patients suspected of COVID-19},
X3 := {patients suspected of bacterial pneumonia}.
Overlaps X_i ∩ X_j ≠ ∅ naturally occur because the same patient can be suspected of multiple conditions before confirmatory tests.
For each pole X_i, compute Pawlak lower/upper approximations under the same R:

X̲_i := R̲(X_i) = { p ∈ U | [p]_R ⊆ X_i },   X̄_i := R̄(X_i) = { p ∈ U | [p]_R ∩ X_i ≠ ∅ }.

Interpretation:
• X̲_i are patients definitely in pole i given screening granules (everyone with the same screening pattern is labeled i).
• X̄_i are patients possibly in pole i (at least one patient with the same pattern is labeled i).

Thus the clinical triage output can be represented as the 3-polar rough object

MRS(X_1, X_2, X_3) = ((X̲_1, X̄_1), (X̲_2, X̄_2), (X̲_3, X̄_3)),

which simultaneously captures “definitely/possibly influenza”, “definitely/possibly COVID-19”, and “definitely/possibly pneumonia” under the same screening-based indiscernibility.
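A small sketch makes the “one relation, many poles” structure concrete. The screening vectors and pole labels below are our own invented toy data, not taken from the text:

```python
# Hypothetical screening vectors (Fever, Cough, SpO2Low, Travel) per patient.
phi = {
    "p1": (1, 1, 0, 0), "p2": (1, 1, 0, 0),
    "p3": (0, 1, 1, 0), "p4": (1, 0, 0, 1),
}
# Overlapping tentative labels (a differential diagnosis).
poles = {
    "influenza": {"p1", "p2"},
    "covid19":   {"p2", "p3"},
    "pneumonia": {"p3"},
}
U = set(phi)

def granule(p):  # [p]_R: patients with the same screening pattern
    return {q for q in U if phi[q] == phi[p]}

# One (lower, upper) pair per pole, all computed under the same relation R.
MRS = {name: ({p for p in U if granule(p) <= Xi},   # definitely in pole
              {p for p in U if granule(p) & Xi})    # possibly in pole
       for name, Xi in poles.items()}
```

Here p1 and p2 share a screening pattern, so p1 becomes only possibly COVID-19 even though it is not labeled with that pole.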


# Page. 38

![Page Image](https://bcdn.docswell.com/page/K74W4MN1E1.jpg)

2.16 Bipartite Rough Set
Bipartite rough sets index Pawlak approximations by parameter pairs drawn from two disjoint “parts” of attribute values; each pair induces its own lower/upper approximation, enabling explicit granular two-sided uncertainty analysis.
Definition 2.16.1 (Bipartite Rough Set). Let U ≠ ∅ be a universe and let R ⊆ U × U be an equivalence relation. Let A and B be two disjoint sets of attribute values (two “parts”) and set the parameter domain

J := A × B.

A bipartite rough set (under R) is a pair (F, J) where F : J → P(U) is a mapping. For each (a, b) ∈ J, define the lower and upper approximations of F(a, b) by

F̲(a, b) := R̲(F(a, b)),   F̄(a, b) := R̄(F(a, b)).

Thus, each parameter (a, b) induces the rough approximation pair (F̲(a, b), F̄(a, b)).
Example 2.16.2 (Bipartite rough customer segments for a credit-card campaign). Let U be the
set of credit-card applicants in a given month. Let C = {IncomeBracket, EmploymentType, PastDefaultFlag}
be condition attributes, and define the indiscernibility (equivalence) relation R ⊆ U × U by
(x, y) ∈ R ⇐⇒ ∀a ∈ C, fa (x) = fa (y),
so that [x]R collects applicants with the same risk-profile summary.
Let the two disjoint “parts” of attribute values be
A = {Young, Middle, Senior} (age group),
B = {Urban, Suburban, Rural} (residential zone),
and set J := A × B .
For each (a, b) ∈ J , define F (a, b) ⊆ U as the set of applicants whose recorded age group is
a, whose zone is b, and who clicked the bank’s “premium card” offer (a behavioral signal of
interest). Then (F, J) is a bipartite rough set (under R), and for each (a, b) ∈ J we form

F̲(a, b) := R̲(F(a, b)) = { x ∈ U | [x]_R ⊆ F(a, b) },
F̄(a, b) := R̄(F(a, b)) = { x ∈ U | [x]_R ∩ F(a, b) ≠ ∅ }.

Interpretation: F̲(a, b) contains applicants definitely belonging to segment (a, b) (all applicants with the same risk profile also show interest and fall in the same part-values), whereas F̄(a, b) contains applicants who possibly belong to (a, b). In practice, the bank can auto-target F̲(a, b) for a low-risk marketing action, and send the boundary F̄(a, b) \ F̲(a, b) to manual review or to a softer offer.


# Page. 39

![Page Image](https://bcdn.docswell.com/page/LJ1Y4855EG.jpg)

2.17 TreeRough Set
A TreeRough set extends classical rough set approximations by indexing them with a fixed
hierarchical (tree-structured) attribute system [44]. Intuitively, instead of working with a single
indiscernibility relation, we consider a family of equivalence relations attached to the nodes of
an attribute tree. A related concept with a similar hierarchical structure is the TreeSoft set (and
its variants) [87–89]. The formal definition is given below.
Definition 2.17.1 (TreeRough set). Let U be a nonempty (finite) universe, and let Tree(A) be
a fixed rooted tree whose nodes represent attributes. Denote by V (Tree(A)) the set of all nodes
of Tree(A). Assume that each node a ∈ V (Tree(A)) is equipped with an equivalence relation
Ra ⊆ U × U,
and write the Ra -equivalence class of x ∈ U as
[x]Ra := { y ∈ U | (x, y) ∈ Ra }.
For any subset X ⊆ U and any node a ∈ V (Tree(A)), define the lower and upper approximations
of X with respect to Ra by


X̲_a := { x ∈ U | [x]_{R_a} ⊆ X },   X̄_a := { x ∈ U | [x]_{R_a} ∩ X ≠ ∅ }.

The TreeRough set (tree-indexed rough approximation) of X is the collection

TR(X) := { (X̲_a, X̄_a) | a ∈ V(Tree(A)) }.
Example 2.17.2 (TreeRough set for hierarchical medical triage). Let U = {p1 , p2 , p3 , p4 , p5 , p6 }
be a set of patients. Consider a rooted attribute tree Tree(A) whose nodes represent clinical
attributes at different granularities:
Clinical (root) −→ a_S = Symptoms,  a_I = Imaging.
We define, for each node a ∈ V (Tree(A)), an equivalence relation Ra ⊆ U × U (so that
(x, y) ∈ Ra means that x and y are indiscernible with respect to the attribute(s) at a), as
in the TreeRough framework.
Data. Assume the following observed values:

| Patient | Fever | Cough | X-ray |
| --- | --- | --- | --- |
| p1 | H | Y | I |
| p2 | H | Y | I |
| p3 | N | Y | I |
| p4 | H | N | C |
| p5 | N | Y | C |
| p6 | N | N | C |

where H/N means high/normal, Y/N means yes/no, and I/C means infiltrate/clear.


# Page. 40

![Page Image](https://bcdn.docswell.com/page/GJWGXZ5W72.jpg)

Node relations. Define R_{a_S} by equality of the symptom pair (Fever, Cough), and define R_{a_I} by equality of the X-ray result:

(x, y) ∈ R_{a_S} ⇐⇒ (Fever(x), Cough(x)) = (Fever(y), Cough(y)),
(x, y) ∈ R_{a_I} ⇐⇒ Xray(x) = Xray(y).

Hence the corresponding equivalence classes are

[p1]_{R_{a_S}} = [p2]_{R_{a_S}} = {p1, p2},   [p3]_{R_{a_S}} = [p5]_{R_{a_S}} = {p3, p5},   [p4]_{R_{a_S}} = {p4},   [p6]_{R_{a_S}} = {p6},

and

[p1]_{R_{a_I}} = [p2]_{R_{a_I}} = [p3]_{R_{a_I}} = {p1, p2, p3},   [p4]_{R_{a_I}} = [p5]_{R_{a_I}} = [p6]_{R_{a_I}} = {p4, p5, p6}.
Target concept. Let X ⊆ U be the set of patients who truly have pneumonia:
X = {p1 , p2 , p3 }.
Tree-indexed rough approximations. At the node a_S (Symptoms), the Pawlak lower/upper approximations are

X̲_{a_S} = { x ∈ U | [x]_{R_{a_S}} ⊆ X } = {p1, p2},
X̄_{a_S} = { x ∈ U | [x]_{R_{a_S}} ∩ X ≠ ∅ } = {p1, p2, p3, p5}.

Thus p5 lies in the boundary at the symptom level (it shares symptoms with p3 but is not a pneumonia case). At the node a_I (Imaging), we obtain

X̲_{a_I} = { x ∈ U | [x]_{R_{a_I}} ⊆ X } = {p1, p2, p3},
X̄_{a_I} = { x ∈ U | [x]_{R_{a_I}} ∩ X ≠ ∅ } = {p1, p2, p3}.

So the imaging node yields an exact (crisp) description of X in this toy dataset.

Interpretation. The TreeRough set TR(X) collects these approximation pairs for each node in the attribute tree, enabling a hierarchical view: coarse screening at a_S and refined certainty at a_I.
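The node-indexed pairs above can be recomputed with a short sketch. The data are the toy table from Example 2.17.2; the helper names (`node_approx`, `nodes`) are our own:

```python
# Patient table from Example 2.17.2: (Fever, Cough, X-ray).
data = {
    "p1": ("H", "Y", "I"), "p2": ("H", "Y", "I"), "p3": ("N", "Y", "I"),
    "p4": ("H", "N", "C"), "p5": ("N", "Y", "C"), "p6": ("N", "N", "C"),
}
X = {"p1", "p2", "p3"}  # confirmed pneumonia cases
U = set(data)

# Each tree node projects a patient onto the attributes that node governs.
nodes = {
    "a_S": lambda p: data[p][:2],  # Symptoms node: (Fever, Cough)
    "a_I": lambda p: data[p][2],   # Imaging node: X-ray only
}

def node_approx(key):
    def granule(x):  # [x]_{R_a}: patients indiscernible from x at this node
        return {y for y in U if key(y) == key(x)}
    return ({x for x in U if granule(x) <= X},
            {x for x in U if granule(x) & X})

TR = {a: node_approx(key) for a, key in nodes.items()}
```

A ForestRough set (Section 2.18) is obtained the same way, by taking the union of such collections over several attribute trees.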
2.18 ForestRough Set
Forest rough set computes multiple lower/upper approximations across a forest of attribute trees,
aggregating tree-wise rough descriptions for robust decisions. A related concept with a similar
hierarchical structure is the ForestSoft set (and its variants) [90, 91].
Definition 2.18.1 (ForestRough Set). Let U be a nonempty universe. Let T be an index set and, for each t ∈ T, let Tree(A^(t)) be an attribute tree. Define the (disjoint) forest of attributes by

Forest({A^(t)}_{t∈T}) := ⊔_{t∈T} Tree(A^(t)),


# Page. 41

![Page Image](https://bcdn.docswell.com/page/4EZL61N273.jpg)

and assume that each node (attribute) a ∈ Forest({A^(t)}_{t∈T}) is equipped with an equivalence relation R_a ⊆ U × U on U. For x ∈ U, write [x]_{R_a} := { y ∈ U | (x, y) ∈ R_a }. For any X ⊆ U and any attribute node a ∈ Forest({A^(t)}_{t∈T}), define the lower/upper rough approximations of X w.r.t. R_a by

X̲_a := { x ∈ U | [x]_{R_a} ⊆ X },   X̄_a := { x ∈ U | [x]_{R_a} ∩ X ≠ ∅ }.
The ForestRough Set induced by the forest Forest({A^(t)}_{t∈T}) is the mapping

FR : P(U) −→ P(P(U) × P(U)),   FR(X) := { (X̲_a, X̄_a) | a ∈ Forest({A^(t)}_{t∈T}) }.

Equivalently, if each tree yields the TreeRough collection

TR_t(X) := { (X̲_a, X̄_a) | a ∈ Tree(A^(t)) },

then

FR(X) = ⋃_{t∈T} TR_t(X).
Example 2.18.2 (ForestRough set for hospital triage with multiple attribute trees). Consider
an emergency department (ED) triage task. Let U be the finite set of patients who arrived
during a fixed period (e.g., one month). We model the available clinical information by a forest
of attribute trees, as in the ForestRough construction.
Universe and target concept. Let

U = {patients in the ED dataset},   X ⊆ U,

where X is the set of patients who were later confirmed (after full workup) to have bacterial pneumonia requiring antibiotics. At triage time, X is not directly observable, so we approximate it.
A forest of attribute trees. Let T := {sym, lab, vit} index three separate attribute trees:
• Tree(A^(sym)): symptoms hierarchy (e.g., respiratory → cough/dyspnea, systemic → chills, etc.),
• Tree(A^(lab)): laboratory hierarchy (e.g., inflammation → CRP/WBC/procalcitonin bins),
• Tree(A^(vit)): vital-signs hierarchy (e.g., temperature/SpO2/respiratory-rate bins).
Form the disjoint forest of attributes

Forest({A^(t)}_{t∈T}) = ⊔_{t∈T} Tree(A^(t)).


# Page. 42

![Page Image](https://bcdn.docswell.com/page/Y76W2L967V.jpg)

Equivalence relations at nodes. For each attribute node a in the forest, define an equivalence relation R_a ⊆ U × U by

(x, y) ∈ R_a ⇐⇒ x and y fall in the same discretized category (bin) at node a.

For instance, at the node “high fever” in Tree(A^(vit)), two patients are R_a-equivalent if their temperatures both lie in the bin [38.5°C, ∞).
ForestRough approximations (triage interpretation). For each node a ∈ Forest({A^(t)}_{t∈T}), compute

X̲_a = { x ∈ U | [x]_{R_a} ⊆ X },   X̄_a = { x ∈ U | [x]_{R_a} ∩ X ≠ ∅ }.

Interpretation:
• x ∈ X̲_a means that, within the granule defined by a, every patient indiscernible from x (under R_a) ended up in X; thus x is definitely high-risk at that attribute resolution.
• x ∈ X̄_a \ X̲_a means the evidence at node a is ambiguous (boundary under a).
• x ∉ X̄_a means no patient in x’s R_a-granule belongs to X, so x is definitely not in X under that node.

The ForestRough description aggregates these rough views across all nodes in all trees:

FR(X) = { (X̲_a, X̄_a) | a ∈ Forest({A^(t)}_{t∈T}) } = ⋃_{t∈T} TR_t(X),

so triage decisions can be justified at different clinical granularities (symptoms vs. labs vs. vitals) and at multiple levels inside each hierarchy.
2.19 Dynamic Rough Set
Dynamic rough sets model time-varying knowledge by allowing the underlying approximation
space (and optionally the target concept) to evolve over time [92–95].
Definition 2.19.1 (Dynamic approximation space and Dynamic Rough Set). Let U be a
nonempty (typically finite) universe and let T be a time index set (e.g., T = N or {1, 2, . . . , T }).
A dynamic approximation space is a family

A = ((U, R_t))_{t∈T},

where for each t ∈ T, R_t ⊆ U × U is an equivalence relation. For x ∈ U, write

[x]_{R_t} := { y ∈ U | (x, y) ∈ R_t }

for the R_t-equivalence class of x at time t.
(A) Fixed concept, evolving knowledge. Given a fixed target concept X ⊆ U, the time-t lower and time-t upper approximations of X are

apr̲_t(X) := { x ∈ U | [x]_{R_t} ⊆ X },   apr̄_t(X) := { x ∈ U | [x]_{R_t} ∩ X ≠ ∅ }.


# Page. 43

![Page Image](https://bcdn.docswell.com/page/G75M21N574.jpg)

The dynamic rough set of X (with respect to A) is the time-indexed family

DRS_A(X) := ((apr̲_t(X), apr̄_t(X)))_{t∈T}.

Equivalently, one may regard DRS_A(X) as a mapping t ↦ (apr̲_t(X), apr̄_t(X)).

(B) Evolving concept (optional generalization). If the target concept itself varies with time, i.e., X_t ⊆ U for each t ∈ T, define

DRS_A(X•) := ((apr̲_t(X_t), apr̄_t(X_t)))_{t∈T}.

In either case, the induced positive, boundary, and negative regions at time t are

POS_t(X) := apr̲_t(X),   BND_t(X) := apr̄_t(X) \ apr̲_t(X),   NEG_t(X) := U \ apr̄_t(X).
Remark 2.19.2 (Dynamic information systems, a common source of R_t). Often R_t is induced by a time-dependent information system S_t = (U, AT, f_t). For a fixed attribute subset P ⊆ AT, one sets

(x, y) ∈ IND_t(P) ⇐⇒ f_t(x, a) = f_t(y, a) for all a ∈ P,

and uses R_t = IND_t(P) in the above definition.
Example 2.19.3 (Weekly merchant-risk monitoring as a Dynamic Rough Set). A payment
processor reassesses merchants every week as new chargeback data arrive. Let

U = {m1, m2, m3, m4, m5} and T = {1, 2}

represent five merchants observed at weeks t = 1 and t = 2.
Time-dependent knowledge (equivalence) relations. At each week t, the processor discretizes (bins) observable features such as

P = {country, industry, chargeback_bucket}

into categorical values, and records them via a feature map f_t : U → V (where V is the finite product of the bins). Define the indiscernibility (equivalence) relation

(x, y) ∈ R_t ⇐⇒ f_t(x) = f_t(y)   (x, y ∈ U).
Thus, (U, Rt )t∈T is a dynamic approximation space.
Target concept. Let X ⊆ U be the set of merchants confirmed as high-risk by investigators:
X = {m1 , m4 }.
Week 1 (coarse evidence). Suppose the week-1 bins are coarse, yielding the partition

[m1]_{R_1} = [m2]_{R_1} = [m3]_{R_1} = {m1, m2, m3},   [m4]_{R_1} = [m5]_{R_1} = {m4, m5}.


# Page. 44

![Page Image](https://bcdn.docswell.com/page/9J29415GER.jpg)

Then the time-1 lower/upper approximations of X are

apr̲_1(X) = { x ∈ U | [x]_{R_1} ⊆ X } = ∅,   apr̄_1(X) = { x ∈ U | [x]_{R_1} ∩ X ≠ ∅ } = U.

Interpretation: with coarse features, no merchant is definitely high-risk, while all are possibly high-risk because each coarse class contains at least one confirmed high-risk merchant.
Week 2 (refined evidence). After an additional week, more data (e.g., a refined chargeback bucket) split the classes:

[m1]_{R_2} = {m1},   [m2]_{R_2} = [m3]_{R_2} = {m2, m3},   [m4]_{R_2} = {m4},   [m5]_{R_2} = {m5}.

Hence

apr̲_2(X) = {m1, m4},   apr̄_2(X) = {m1, m4}.

Interpretation: once the knowledge granules become finer, m1 and m4 become definitely high-risk, and m2, m3, m5 become definitely not high-risk (at week 2).

Dynamic rough set view. The dynamic rough set of X over T = {1, 2} is the time-indexed family

DRS(X) = ((apr̲_1(X), apr̄_1(X)), (apr̲_2(X), apr̄_2(X))) = ((∅, U), ({m1, m4}, {m1, m4})).
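The two-week computation can be sketched directly from the partitions (our own illustration of the example; the helper name `apr` is hypothetical):

```python
# Weekly partitions of merchants from Example 2.19.3.
U = {"m1", "m2", "m3", "m4", "m5"}
X = {"m1", "m4"}  # merchants confirmed high-risk
partitions = {
    1: [{"m1", "m2", "m3"}, {"m4", "m5"}],      # coarse week-1 bins
    2: [{"m1"}, {"m2", "m3"}, {"m4"}, {"m5"}],  # refined week-2 bins
}

def apr(blocks, target):
    """Time-t lower/upper approximations built from the partition blocks."""
    lower = set().union(*(b for b in blocks if b <= target))
    upper = set().union(*(b for b in blocks if b & target))
    return lower, upper

DRS = {t: apr(blocks, X) for t, blocks in partitions.items()}
```

At t = 1 no block is contained in X (lower is empty) while every block meets X (upper is all of U); at t = 2 the refined granules make the description crisp.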
2.20 L-valued rough sets
L-valued rough sets replace the unit interval [0, 1] with a complete lattice L of membership degrees, defining approximations via L-relations and residuated operations [96–99]. Lattice-valued fuzzy sets are a well-known concept with a similar structure [100–102].
Definition 2.20.1 (GL-quantale (degree structure)). Let (L, ∧, ∨, 0, 1) be a complete lattice, and let ⊗ : L × L → L be a commutative, associative binary operation such that α ⊗ 1 = α for all α ∈ L, and α ⊗ ⋁_{j∈J} β_j = ⋁_{j∈J} (α ⊗ β_j) for all α ∈ L and all families {β_j}_{j∈J} ⊆ L. Define the residuated implication ⇒ : L × L → L by

α ⇒ β := ⋁ { γ ∈ L | α ⊗ γ ≤ β }.

We call (L, ⊗) a GL-quantale if it additionally satisfies

α ∧ β = α ⊗ (α ⇒ β)   (α, β ∈ L).
Definition 2.20.2 (L-universe and L-powerset). Let X be a nonempty set and let U : X → L be a fixed L-set (called an L-universe on X). An L-set Q : X → L is called an L-subset in U if Q(x) ≤ U(x) for all x ∈ X. The family of all L-subsets of U is denoted by

P(U) := { Q ∈ L^X | Q ≤ U },

and is called the L-powerset of U.


# Page. 45

![Page Image](https://bcdn.docswell.com/page/DEY4MZKGJM.jpg)

Definition 2.20.3 (L-valued approximation space). Let U be an L-universe on X . A mapping
R : X × X → L is called an L-valued relation on U if
R(x, y) ≤ U (x) ∧ U (y)
(x, y ∈ X).
The pair (U, R) is called an L-valued approximation space.
Definition 2.20.4 (L-valued rough approximation operators and L-valued rough set). Let (U, R) be an L-valued approximation space and let Q ∈ P(U). Define mappings (operators) R_∗, R^∗ : P(U) → P(U) by, for each x ∈ X,

R_∗(Q)(x) := U(x) ⊗ ⋀_{y∈X} (R(y, x) ⇒ Q(y)),
R^∗(Q)(x) := ⋁_{y∈X} R(y, x) ⊗ (U(y) ⇒ Q(y)).

Then R_∗(Q) is called the L-valued lower rough approximation of Q, R^∗(Q) is called the L-valued upper rough approximation of Q, and the pair

(R_∗(Q), R^∗(Q))

is called the L-valued rough set (rough approximation) of Q in (U, R).
Example 2.20.5 (L-valued rough set for linguistic credit-risk screening). A bank performs an
early credit-risk triage using qualitative grades rather than precise probabilities.
(1) Degree lattice. Let

L = {0, 1/2, 1} with 0 ≤ 1/2 ≤ 1,

interpreted as {Low, Medium, High}. Use the Gödel (min) product

α ⊗ β := min{α, β},

and Gödel residuum

α ⇒ β := 1 if α ≤ β, and α ⇒ β := β if α > β.
(2) Universe and an L-valued relation (similarity). Let X = {u1 , u2 , u3 } be three loan
applicants. Take the L-universe U : X → L to be constant U (ui ) = 1 (all applicants are fully
present).
Define an L-valued similarity relation R : X × X → L by

| R(·,·) | u1 | u2 | u3 |
| --- | --- | --- | --- |
| u1 | 1 | 1/2 | 0 |
| u2 | 1/2 | 1 | 1/2 |
| u3 | 0 | 1/2 | 1 |

where R(u_i, u_j) = 1 means “very similar credit profiles”, R(u_i, u_j) = 1/2 means “moderately similar”, and R(u_i, u_j) = 0 means “dissimilar”.


# Page. 46

![Page Image](https://bcdn.docswell.com/page/VJNYW3R878.jpg)

(3) L-subset (linguistic risk concept). Let Q ∈ L^X represent the analysts’ preliminary (linguistic) judgment of the concept “likely to default”:

Q(u1) = 1/2,   Q(u2) = 1,   Q(u3) = 0.
(4) L-valued lower/upper approximations. With U(·) = 1, the general operators reduce to

R_∗(Q)(x) = ⋀_{y∈X} (R(y, x) ⇒ Q(y)),   R^∗(Q)(x) = ⋁_{y∈X} (R(y, x) ⊗ Q(y)).

A direct computation yields:

| x | R_∗(Q)(x) | R^∗(Q)(x) |
| --- | --- | --- |
| u1 | 1/2 | 1/2 |
| u2 | 0 | 1 |
| u3 | 0 | 1/2 |
Interpretation. Applicant u1 is definitely medium-risk (1/2) given the similarity structure. Applicant u2 is possibly high-risk (upper = 1) but not definitely so (lower = 0), reflecting conflicting evidence from similar medium/low-risk profiles. Applicant u3 is not definitely risky (lower = 0) but is possibly medium-risk (upper = 1/2) because u3 is moderately similar to the high-risk applicant u2.
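The “direct computation” above is easy to verify. A minimal sketch with the Gödel operations (our own code, using exact fractions to avoid float comparisons):

```python
from fractions import Fraction

half = Fraction(1, 2)
X = ["u1", "u2", "u3"]
# Symmetric L-valued similarity relation from the example.
R = {("u1", "u1"): 1,    ("u1", "u2"): half, ("u1", "u3"): 0,
     ("u2", "u1"): half, ("u2", "u2"): 1,    ("u2", "u3"): half,
     ("u3", "u1"): 0,    ("u3", "u2"): half, ("u3", "u3"): 1}
Q = {"u1": half, "u2": 1, "u3": 0}  # linguistic "likely to default"

def godel_implies(a, b):
    """Gödel residuum: 1 if a <= b, else b."""
    return 1 if a <= b else b

# With U(.) = 1 the operators reduce to a min over implications (lower)
# and a max over min-products (upper).
lower = {x: min(godel_implies(R[(y, x)], Q[y]) for y in X) for x in X}
upper = {x: max(min(R[(y, x)], Q[y]) for y in X) for x in X}
```

The resulting dictionaries reproduce the table: lower degrees (1/2, 0, 0) and upper degrees (1/2, 1, 1/2) for u1, u2, u3.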
2.21 Graded Rough Set
Graded rough sets relax strict inclusion and exclusion by counting elements: a granule enters the upper approximation when it overlaps the target set in more than k elements, and enters the lower approximation when at most k of its elements lie outside the target [103–106].
Definition 2.21.1 (Graded rough approximations and graded rough set). Let U be a nonempty
finite universe and let R ⊆ U × U be an equivalence relation. For x ∈ U , write
[x]R := { y ∈ U | (x, y) ∈ R }
for the R-equivalence class (granule) of x. Fix a nonnegative integer k ∈ N0 , called the grade.
For any A ⊆ U, the grade-k R-upper and grade-k R-lower approximations of A are defined by

apr̄_R^k(A) := ⋃ { [x]_R : |[x]_R ∩ A| > k },
apr̲_R^k(A) := ⋃ { [x]_R : |[x]_R \ A| ≤ k } = ⋃ { [x]_R : |[x]_R| − |[x]_R ∩ A| ≤ k }.

The pair (apr̲_R^k(A), apr̄_R^k(A)) is called the graded (grade-k) rough approximation of A. If apr̲_R^k(A) = apr̄_R^k(A), then A is R-definable by grade k; otherwise, A is called a graded (grade-k) rough set (with respect to R).


# Page. 47

![Page Image](https://bcdn.docswell.com/page/YE9PX9MXJ3.jpg)

Example 2.21.2 (Graded rough set in clinical triage (allowing up to one exception)). Consider an emergency department that groups patients by a coarse symptom profile (e.g., high fever & cough, moderate fever & myalgia, low fever/no cough). Let

U = {p1, p2, p3, p4, p5, p6, p7, p8, p9}

be the patients arriving today, and let R be the equivalence relation

(p_i, p_j) ∈ R ⇐⇒ p_i and p_j have the same symptom profile.

Assume the induced equivalence classes are

C1 = {p1, p2, p3} (high fever & cough),
C2 = {p4, p5, p6, p7} (moderate fever & myalgia),
C3 = {p8, p9} (low fever/no cough).
Let A ⊆ U be the set of patients who truly have influenza (confirmed later by PCR):
A = {p1 , p2 , p3 , p4 , p5 }.
Fix grade k = 1 (we tolerate at most one “exception” in a class when declaring definite membership). Then the grade-1 lower approximation is

apr̲_R^1(A) = ⋃ { [x]_R : |[x]_R \ A| ≤ 1 } = C1,

because |C1 \ A| = 0 ≤ 1 but |C2 \ A| = 2 > 1 and |C3 \ A| = 2 > 1. The grade-1 upper approximation is

apr̄_R^1(A) = ⋃ { [x]_R : |[x]_R ∩ A| > 1 } = C1 ∪ C2,

because |C1 ∩ A| = 3 > 1 and |C2 ∩ A| = 2 > 1, while |C3 ∩ A| = 0 ≯ 1.

Hence, the induced regions are:

POS^(k=1)(A) = C1 = {p1, p2, p3},
BND^(k=1)(A) = C2 = {p4, p5, p6, p7},
NEG^(k=1)(A) = C3 = {p8, p9}.

Interpretation: the symptom class C1 is “definitely flu” up to one exception (here, none), C2 is “possibly flu” (mixed outcomes), and C3 is “definitely not flu”.
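Because the grade-k approximations are just counting conditions over whole classes, they reduce to two one-line set unions. A sketch over the example's classes (our own code):

```python
# Symptom classes and confirmed-flu set from Example 2.21.2.
classes = [
    {"p1", "p2", "p3"},             # C1: high fever & cough
    {"p4", "p5", "p6", "p7"},       # C2: moderate fever & myalgia
    {"p8", "p9"},                   # C3: low fever / no cough
]
A = {"p1", "p2", "p3", "p4", "p5"}  # PCR-confirmed influenza
k = 1

# Grade-k approximations: unions of whole classes passing the count test.
lower = set().union(*(c for c in classes if len(c - A) <= k))  # at most k exceptions
upper = set().union(*(c for c in classes if len(c & A) > k))   # overlap exceeds k
```

Here `lower` is C1 and `upper` is C1 ∪ C2, so the boundary is C2 and the negative region is C3, matching the regions above.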
2.22 Linguistic Rough Set
Linguistic rough sets approximate fuzzy concepts using linguistic labels and quantifier-style summaries, producing lower and upper linguistic approximations that support reasoning under uncertainty [107].
Definition 2.22.1 (Linguistic approximation space (LAS)). Let U = {x1 , . . . , xm } be a nonempty
finite set of objects. Let L = {s0 , s1 , . . . , sg } be a finite totally ordered set of linguistic labels
(with s0 ≺ s1 ≺ · · · ≺ sg ), where s0 plays the role of the least label. Let C = {C1 , . . . , Cn } be
a family of linguistic concepts, where each Cj is a mapping
Cj : U → L.
The triple hU, C, Li is called a linguistic approximation space (LAS).


# Page. 48

![Page Image](https://bcdn.docswell.com/page/GE8D29L9ED.jpg)

Throughout, for V, W : U → L we use the pointwise lattice operations

(V ∧ W)(x) := min_≺ {V(x), W(x)},   (V ∨ W)(x) := max_≺ {V(x), W(x)}   (x ∈ U),

and for K ⊆ C (resp. L′ ⊆ C) we write

⋀_{C_j ∈ K} C_j,   ⋁_{C_j ∈ L′} C_j

for the iterated pointwise ∧ and ∨.
Definition 2.22.2 (Support and inclusion degree). For a concept V : U → L, define its support by

supp(V) := { x ∈ U | V(x) ≻ s0 },   |V| := |supp(V)|.

For two concepts V, W : U → L with |V| > 0, the (linguistic) inclusion degree of V in W is

D(W, V) := |{ x ∈ supp(V) | V(x) ⪯ W(x) }| / |V| ∈ [0, 1].
Definition 2.22.3 (Degree of certainty for approximating a decision concept). Let ⟨U, C, L⟩ be a LAS and let Y : U → L be a (linguistic) decision concept. Define

k_L := D(Y, ⋀_{C_j ∈ C} C_j),   k_U := D(⋁_{C_j ∈ C} C_j, Y),   k := min{k_L, k_U}.
Definition 2.22.4 (k-approximability and linguistic rough approximations). Let ⟨U, C, L⟩ be a LAS, let Y : U → L, and fix k ∈ [0, 1]. Define two families of attribute subsets

P(Y) := { K ⊆ C | D(Y, ⋀_{C_j ∈ K} C_j) ≥ k },
Q(Y) := { L′ ⊆ C | D(⋁_{C_j ∈ L′} C_j, Y) ≥ k }.

We say that Y is k-approximable (in ⟨U, C, L⟩) if P(Y) ≠ ∅ and Q(Y) ≠ ∅.
Assume Y is k-approximable. Choose K∗ ∈ P(Y) and L∗ ∈ Q(Y) such that

| supp(⋁_{C_j ∈ L∗} C_j) \ supp(⋀_{C_j ∈ K∗} C_j) |

is minimized among all pairs (K, L′) ∈ P(Y) × Q(Y). Define the linguistic rough lower and linguistic rough upper approximations of Y by

Y̲_k := ⋀_{C_j ∈ K∗} C_j,   Ȳ^k := ⋁_{C_j ∈ L∗} C_j.

The resulting linguistic rough set (LRS) of Y (at level k) is the pair

LRS_k(Y) := (Y̲_k, Ȳ^k).


# Page. 49

![Page Image](https://bcdn.docswell.com/page/LELM2WLM7R.jpg)

Example 2.22.5 (Hotel-review Linguistic Rough Set (real-life example)). Consider a small
hotel-recommendation task where review summaries are expressed by linguistic labels.
Objects (hotels). Let
U = {h1 , h2 , h3 , h4 }.
Linguistic label set. Let
L = {s0 , s1 , s2 , s3 , s4 },
s0 ≺ s1 ≺ s2 ≺ s3 ≺ s4 ,
interpreted as
s0 = very low, s1 = low, s2 = medium, s3 = high, s4 = very high.
Linguistic concepts (attributes). Let C = {C1, C2, C3} where C1 = Cleanliness, C2 = Service, C3 = Location, each C_j : U → L. Assume the following linguistic evaluations (e.g., obtained by aggregating text reviews):

|  | h1 | h2 | h3 | h4 |
| --- | --- | --- | --- | --- |
| C1 (Cleanliness) | s3 | s2 | s1 | s3 |
| C2 (Service) | s4 | s3 | s2 | s2 |
| C3 (Location) | s3 | s2 | s1 | s4 |
Decision concept. Let Y : U → L denote the linguistic decision “Overall recommended”:
Y (h1 ) = s3 ,
Y (h2 ) = s2 ,
Y (h3 ) = s1 ,
Y (h4 ) = s3 .
Compute a k-level linguistic rough approximation. Using the pointwise operations

(V ∧ W)(x) = min_≺ {V(x), W(x)},   (V ∨ W)(x) = max_≺ {V(x), W(x)},

take

K∗ = {C1, C2},   L∗ = {C2, C3}.

Then the candidate lower/upper linguistic approximations are

Y̲_k := ⋀_{C_j ∈ K∗} C_j = min{C1, C2},   Ȳ^k := ⋁_{C_j ∈ L∗} C_j = max{C2, C3}.

Concretely,

Y̲_k(h1) = s3,   Y̲_k(h2) = s2,   Y̲_k(h3) = s1,   Y̲_k(h4) = s2,
Ȳ^k(h1) = s4,   Ȳ^k(h2) = s3,   Ȳ^k(h3) = s2,   Ȳ^k(h4) = s4.
Interpretation. The lower approximation Y̲_k represents hotels that are definitely recommended at level k based on conservative aggregation of key attributes (here, cleanliness and service), while the upper approximation Ȳ^k represents hotels that are possibly recommended at level k based on optimistic aggregation (here, service or location).

Hence, the linguistic rough set of the decision concept Y (at level k) is

LRS_k(Y) = (Y̲_k, Ȳ^k).
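Since the labels s0 ≺ · · · ≺ s4 form a chain, encoding each label by its index turns the pointwise ∧ and ∨ into ordinary min and max. A sketch recomputing the hotel example (our own code):

```python
# Labels s0..s4 encoded by their index, so min/max realize the pointwise ∧ and ∨.
C1 = {"h1": 3, "h2": 2, "h3": 1, "h4": 3}  # Cleanliness
C2 = {"h1": 4, "h2": 3, "h3": 2, "h4": 2}  # Service
C3 = {"h1": 3, "h2": 2, "h3": 1, "h4": 4}  # Location
hotels = ["h1", "h2", "h3", "h4"]

lower = {h: min(C1[h], C2[h]) for h in hotels}  # ⋀ over K* = {C1, C2}
upper = {h: max(C2[h], C3[h]) for h in hotels}  # ⋁ over L* = {C2, C3}
```

The computed indices (3, 2, 1, 2) and (4, 3, 2, 4) are exactly the label sequences s3, s2, s1, s2 and s4, s3, s2, s4 given above.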


# Page. 50

![Page Image](https://bcdn.docswell.com/page/4JMY89M6JW.jpg)

2.23 Weak Rough Set
A weak rough set is any pair of subsets (lower, upper) with lower contained in upper, representing
certainty and possibility [108, 109].
Definition 2.23.1 (Weak rough set). [109] Let U be a nonempty universe. A weak rough set over U is an ordered pair

A = (AL, AU),   AL, AU ⊆ U,   AL ⊆ AU,

where AL is called the lower approximation (definite part) and AU is called the upper approximation (possible part).
Equivalently, a point x ∈ U is interpreted as

x ∈ AL ⇒ x definitely belongs to A,
x ∈ AU \ AL ⇒ x is undecidable for A,
x ∉ AU ⇒ x definitely does not belong to A.
The positive, boundary, and negative regions of A are defined by
POS(A) := AL ,
BND(A) := AU \ AL ,
NEG(A) := U \ AU .
Definition 2.23.2 (Basic operations on weak rough sets). Let A = (AL , AU ) and B = (BL , BU )
be weak rough sets over U . Define
A ∪ B := (AL ∪ BL , AU ∪ BU ),
A ∩ B := (AL ∩ BL , AU ∩ BU ),
Ac := (U \ AU , U \ AL ),
and the (componentwise) inclusion order
A ⊆ B ⇐⇒ AL ⊆ BL and AU ⊆ BU .
These operations preserve the constraint “lower ⊆ upper”, hence are well-defined on weak rough
sets.
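A minimal sketch of these componentwise operations, assuming subsets are represented as Python frozensets (the function names are illustrative, not from the text):

```python
# Minimal sketch of Definition 2.23.2 with subsets as frozensets over a
# small universe; names (union_, intersect, complement) are illustrative.

U = frozenset(range(6))

def union_(a, b):      # (AL ∪ BL, AU ∪ BU)
    return (a[0] | b[0], a[1] | b[1])

def intersect(a, b):   # (AL ∩ BL, AU ∩ BU)
    return (a[0] & b[0], a[1] & b[1])

def complement(a):     # A^c = (U \ AU, U \ AL)
    return (U - a[1], U - a[0])

A = (frozenset({0, 1}), frozenset({0, 1, 2}))   # lower ⊆ upper
B = (frozenset({2}), frozenset({2, 3}))

# the "lower ⊆ upper" constraint is preserved by all three operations
for pair in (union_(A, B), intersect(A, B), complement(A)):
    assert pair[0] <= pair[1]
print(complement(A))
```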
Remark 2.23.3 (Relationship to Pawlak rough sets). Given an approximation space (U, R)
(typically R is an equivalence relation) and any X ⊆ U , the Pawlak rough approximation pair
(R(X), R(X)) is a weak rough set. Thus, Pawlak rough sets are special cases of weak rough
sets in which the pair (AL , AU ) is induced by a specific information relation R (and hence is
constrained by that R).
# Page. 51

![Page Image](https://bcdn.docswell.com/page/PJR95GVN79.jpg)

Table 2.3: Concise comparison between Pawlak rough sets and weak rough sets.

| Aspect | Pawlak rough set | Weak rough set |
|---|---|---|
| Primitive data | Approximation space (U, R) (usually R an equivalence) plus target set X ⊆ U. | Only a pair (AL, AU) of subsets of U with AL ⊆ AU. |
| How (lower, upper) arise | Derived from R via R̲(X) = {x : [x]R ⊆ X} and R̄(X) = {x : [x]R ∩ X ≠ ∅}. | Chosen/estimated directly; no requirement that it comes from any indiscernibility/information relation. |
| Admissible pairs | Constrained by the granulation induced by R (e.g., unions of R-classes). | Any pair of subsets satisfying AL ⊆ AU (no granulation constraint). |
| Regions / semantics | Positive/boundary/negative regions determined by (R̲(X), R̄(X)). | Same semantics using POS(A) = AL, BND(A) = AU \ AL, NEG(A) = U \ AU. |
| Set operations | Often studied via approximation operators induced by R. | Naturally closed under componentwise ∪, ∩ and complement (U \ AU, U \ AL). |
2.24 Decision-Theoretic Rough sets
Decision-theoretic rough sets classify objects using Bayesian expected risk and loss, yielding positive, boundary, and negative regions with optimal actions [110–113]. Decision-theoretic rough sets have also seen a large number of research publications in recent years [114–117]. Related concepts include multigranulation decision-theoretic rough sets [118–120] and multi-class decision-theoretic rough sets [121].
Definition 2.24.1 (Decision-Theoretic Rough Set (DTRS)). [110–113] Let U be a nonempty
finite universe and let E ⊆ U × U be an equivalence relation. For x ∈ U , write
[x]E := { y ∈ U | (x, y) ∈ E }
for the E -equivalence class (information granule) of x. Fix a target concept C ⊆ U and denote
its complement by C c := U \ C .
(1) Conditional probability. Define the conditional probability that granule [x]E belongs to C by

p(x) := Prob(C | [x]E) ∈ [0, 1].

In the standard finite and uniform setting, one often uses the empirical estimate

p(x) := |[x]E ∩ C| / |[x]E|.
(2) Actions, states, and losses. Consider three actions
Act := {ActP , ActB , ActN },
interpreted as accept (ActP ), defer (ActB ), and reject (ActN ) the hypothesis “x ∈ C ”. Let the
set of states be Ω := {C, C c } and let
λ : Act × Ω → R≥0
# Page. 52

![Page Image](https://bcdn.docswell.com/page/PEXQKXZ8JX.jpg)

be a loss (cost) function. Write λiP := λ(Acti , C) and λiN := λ(Acti , C c ) for i ∈ {P, B, N }.
(3) Conditional risks (expected losses). For each x ∈ U and action Acti ∈ Act, define the conditional risk

Risk(Acti | x) := λiP p(x) + λiN (1 − p(x)),   i ∈ {P, B, N}.

(4) Minimum-risk decision rule and induced regions. Assume the standard cost ordering

λPP ≤ λBP < λNP,   λNN ≤ λBN < λPN,

which expresses that (i) accepting is cheapest when x ∈ C and rejecting is most costly then, and (ii) rejecting is cheapest when x ∉ C and accepting is most costly then. Define the probability thresholds

α := (λPN − λBN) / ((λPN − λBN) + (λBP − λPP)),   β := (λBN − λNN) / ((λBN − λNN) + (λNP − λBP)).

(Under the above ordering, one has 0 ≤ β < α ≤ 1.)

Then the DTRS three-way decision regions for C are

POS(α,β)(C) := {x ∈ U | p(x) ≥ α},
NEG(α,β)(C) := {x ∈ U | p(x) ≤ β},
BND(α,β)(C) := U \ (POS(α,β)(C) ∪ NEG(α,β)(C)).

Equivalently, the minimum-risk rule is:

p(x) ≥ α ⇒ choose ActP,   p(x) ≤ β ⇒ choose ActN,   β < p(x) < α ⇒ choose ActB.
(5) DTRS approximations. Define the decision-theoretic lower and upper approximations of C by

apr̲(α,β)(C) := POS(α,β)(C),   apr̄(α,β)(C) := U \ NEG(α,β)(C) = POS(α,β)(C) ∪ BND(α,β)(C).

The pair (apr̲(α,β)(C), apr̄(α,β)(C)) is called the decision-theoretic rough approximation of C (with respect to E and λ).
Example 2.24.2 (Real-life DTRS: credit approval with three-way decisions). Consider a bank
that screens loan applicants. Let U be the set of applicants in the current month. Define an
equivalence relation E by information granules: two applicants are E -equivalent if they share
the same discretized profile (e.g., income band, employment type, credit-score band, and debt-ratio band).
Let the target concept be

C := {applicants who will not default within 12 months},   C^c := {default}.

For an applicant x ∈ U, estimate

p(x) = Prob(C | [x]E) ≈ |[x]E ∩ C| / |[x]E|
# Page. 53

![Page Image](https://bcdn.docswell.com/page/3EK95W8LED.jpg)

from historical outcomes of past applicants in the same granule.
The bank uses three actions:
ActP = approve,
ActB = manual review / request more documents,
ActN = reject.
A simple loss model (in monetary units) is:
λP P = 0 (normal servicing cost),
λP N = 100 (expected loss if default after approval),
λBP = 5 (review cost if actually safe),
λBN = 10 (review cost even if risky),
λN P = 30 (opportunity cost of rejecting a safe applicant),
λN N = 0 (correct rejection).
Then the thresholds in Definition 2.24.1 are

α = (λPN − λBN) / ((λPN − λBN) + (λBP − λPP)) = (100 − 10) / ((100 − 10) + (5 − 0)) = 90/95 ≈ 0.947,

β = (λBN − λNN) / ((λBN − λNN) + (λNP − λBP)) = (10 − 0) / ((10 − 0) + (30 − 5)) = 10/35 ≈ 0.286.
Hence the three-way decision rule becomes:

p(x) ≥ 0.947 ⇒ approve (ActP),
p(x) ≤ 0.286 ⇒ reject (ActN),
0.286 < p(x) < 0.947 ⇒ defer (ActB).
Interpreting the regions,
POS(α,β) (C) = {high-confidence safe applicants},
NEG(α,β) (C) = {high-confidence risky applicants},
BND(α,β) (C) = {uncertain applicants sent to manual review}.
This is a typical operational setting where DTRS produces an approve / reject / review policy
from granulated empirical probabilities and asymmetric costs.
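The threshold computation and three-way rule of this example can be sketched as follows (illustrative Python; the loss table is the one above, the function and variable names are ours):

```python
# Sketch of Example 2.24.2 (names are ours): derive (alpha, beta) from
# the loss table, then apply the minimum-risk three-way rule.

lam = {  # losses lambda_{action,state}; states: "C" (safe), "Cc" (default)
    ("P", "C"): 0,  ("P", "Cc"): 100,
    ("B", "C"): 5,  ("B", "Cc"): 10,
    ("N", "C"): 30, ("N", "Cc"): 0,
}

alpha = (lam[("P", "Cc")] - lam[("B", "Cc")]) / (
    (lam[("P", "Cc")] - lam[("B", "Cc")]) + (lam[("B", "C")] - lam[("P", "C")]))
beta = (lam[("B", "Cc")] - lam[("N", "Cc")]) / (
    (lam[("B", "Cc")] - lam[("N", "Cc")]) + (lam[("N", "C")] - lam[("B", "C")]))

def decide(p):
    """Three-way decision for p = Prob(safe | granule)."""
    if p >= alpha:
        return "approve"
    if p <= beta:
        return "reject"
    return "review"

print(round(alpha, 3), round(beta, 3))  # 0.947 0.286
print(decide(0.99), decide(0.50), decide(0.10))  # approve review reject
```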
2.25 Type-n Rough Set
A Type-n rough set is an n-level hierarchical construction in which parameters are mapped
recursively to lower-level rough sets, ultimately terminating in the classical Pawlak lower–upper
approximations of subsets of the universe. Related layered frameworks include Type-n fuzzy
sets [122, 123], Type-n neutrosophic sets [124–126], and Type-n soft sets [127].
Definition 2.25.1 (Type-n rough set). Let PA = (U, E, ρ) be a parameterized approximation
space. Define recursively the collections Σ(n) (U, E, ρ) of type-n rough sets as follows.
(1) Σ(1) (U, E, ρ) is the collection of all type-1 rough sets RS(1) (X, B).
# Page. 54

![Page Image](https://bcdn.docswell.com/page/L73WK12675.jpg)

(2) For n ≥ 2, a type-n rough set (briefly, TnRS) over PA is a pair (F^(n), An), where An ⊆ E is a nonempty primary parameter set and

F^(n) : An → Σ^(n−1)(U, E, ρ),   a ↦ F^(n)(a) = (F_a^(n−1), La),

such that La ⊆ E is nonempty for every a ∈ An. The union ⋃_{a ∈ An} La is called the underlying parameter set of the type-n rough set.
Example 2.25.2 (Real-life Type-n rough set: multi-stage medical triage under layered uncertainty). Let U be a finite set of patients arriving at an emergency department. Let E be a finite set of clinical parameters (features/tests), e.g.,

E = {AgeBand, SpO2-Band, TempBand, CRP-Band, CT-Finding, ComorbidityScore, . . . }.

Assume ρ associates to each B ⊆ E an indiscernibility relation on U: patients x, y are ρ(B)-equivalent if they share the same values on all parameters in B.
Fix a target concept X ⊆ U :
X := {patients who truly have a severe condition requiring admission}.
Type-1 (single-parameter-set) rough assessment. For a chosen parameter set B ⊆ E
(e.g. B = {SpO2 -Band, TempBand}), the type-1 rough set RS(1) (X, B) gives a certain-admit
lower region and a possible-admit upper region, based only on quick vitals.
Type-2 (primary parameter selects a protocol, then type-1 inside). Let the primary
parameter set be
A2 = {Protocol} ⊆ E,
where Protocol takes values such as Respiratory, Cardiac, Sepsis. Define F^(2) : A2 → Σ^(1)(U, E, ρ) by mapping the single primary choice a = Protocol to a type-1 rough set whose secondary parameter set depends on the protocol:

F^(2)(Protocol) = (RS^(1)(X, LProtocol), LProtocol),

with, for instance,

LRespiratory = {SpO2-Band, CT-Finding},   LSepsis = {TempBand, CRP-Band}.
Thus a type-2 rough set represents: “choose the clinical pathway, then approximate X using the
tests relevant to that pathway.”
Type-3 (add a further layer: resource level). Introduce a higher-level primary parameter

A3 = {ResourceLevel} ⊆ E,

with values such as Low (limited tests available) and High (full labs/imaging). Define F^(3) : A3 → Σ^(2)(U, E, ρ) by

F^(3)(ResourceLevel) = (F^(2)_ResourceLevel, LResourceLevel),

where each F^(2)_ResourceLevel is itself a type-2 assignment that changes which protocol-specific parameter sets LProtocol are admissible (e.g. under Low, exclude CT-Finding and rely on vitals/labs only).

By iterating this construction, a Type-n rough set models layered real clinical decision-making: policy/constraints (resources) → pathway selection → test selection → rough approximation of "needs admission," with each layer encoded by a primary parameter set and a map into the previous rough-set type.
# Page. 55

![Page Image](https://bcdn.docswell.com/page/87DK3XZMJG.jpg)

2.26 Dominance-based Rough set
Dominance-based rough sets handle ordered criteria using dominance relations, approximating upward/downward unions of decision classes by lower/upper sets consistent with the dominance principle [128–133]. Related concepts, such as fuzzy dominance-based rough sets, are also known [134, 135].
Definition 2.26.1 (Decision table with preference-ordered criteria). [128–130] A (multiple-criteria) decision table is a tuple

S = (U, C ∪ D, V, f),

where U is a nonempty finite set of objects, C is a finite set of condition attributes (assumed to be criteria), D is a finite set of decision attributes, V = ⋃_{q ∈ C∪D} Vq is the family of attribute domains, and f : U × (C ∪ D) → V is the information function with f(x, q) ∈ Vq.

For each criterion q ∈ C, let ≽q be a (weak) preference relation on U such that x ≽q y means "x is at least as good as y with respect to q". Assume that the decision attribute(s) induce a partition

Cl = {Ct | t ∈ T},   T = {1, . . . , n},

and that the classes are preference-ordered (higher index t means a better class).
Definition 2.26.2 (Upward / downward unions). For each t ∈ {1, . . . , n}, define the upward union and downward union of decision classes by

Ct≥ := ⋃_{s≥t} Cs,   Ct≤ := ⋃_{s≤t} Cs.

Thus x ∈ Ct≥ means "x belongs to at least class Ct", and x ∈ Ct≤ means "x belongs to at most class Ct".
Definition 2.26.3 (Dominance relation and dominance cones). Let P ⊆ C be a nonempty set of criteria. The dominance relation induced by P is

x DP y ⟺ (∀q ∈ P) x ≽q y.

For x ∈ U, define the P-dominating and P-dominated sets (dominance cones) by

DP+(x) := {y ∈ U | y DP x},   DP−(x) := {y ∈ U | x DP y}.
Definition 2.26.4 (Dominance-based rough approximations (DRSA)). Fix P ⊆ C and consider the family of upward and downward unions {Ct≥, Ct≤}, t = 1, . . . , n.

(1) Lower approximations. For t = 1, . . . , n, the P-lower approximations of Ct≥ and Ct≤ are

P(Ct≥) := {x ∈ Ct≥ | DP+(x) ⊆ Ct≥},   P(Ct≤) := {x ∈ Ct≤ | DP−(x) ⊆ Ct≤}.
# Page. 56

![Page Image](https://bcdn.docswell.com/page/VJPK4PDQE8.jpg)

These are the objects that belong to the corresponding union without ambiguity under the dominance principle.
(2) Upper approximations (by complementarity). For t = 2, . . . , n and t = 1, . . . , n − 1, respectively, define

P̄(Ct≥) := U \ P(C_{t−1}≤),   P̄(Ct≤) := U \ P(C_{t+1}≥).

(3) Boundary regions. The P-boundary (doubtful) regions are

BnP(Ct≥) := P̄(Ct≥) \ P(Ct≥),   BnP(Ct≤) := P̄(Ct≤) \ P(Ct≤).
Example 2.26.5 (Real-life DRSA: ranking loan applicants under monotone criteria). Let U be
a set of loan applicants. Consider an ordinal decision attribute with n = 3 ordered classes
C1 ≺ C2 ≺ C3,
where C1 = reject, C2 = manual review, and C3 = approve. Let C be a set of evaluation criteria
and choose a monotone subset
P = {Income, CreditScore, DTI} ⊆ C,
where higher Income and CreditScore are better, and lower DTI is better (so we transform it to
a benefit form, e.g. Affordability := −DTI).
Define the dominance relation induced by P by

x DP y ⟺ g(x) ≥ g(y) for every benefit-type criterion g ∈ P.

Let the (forward/backward) dominance cones be

DP+(x) := {y ∈ U | y DP x},   DP−(x) := {y ∈ U | x DP y}.
Form the upward and downward unions

Ct≥ := ⋃_{s≥t} Cs,   Ct≤ := ⋃_{s≤t} Cs   (t = 1, 2, 3).
Interpretation via DRSA approximations.
• x ∈ P (C3≥ ) means: every applicant who dominates x (is at least as good on all P ) is still
in C3≥ = C3 , so x is certainly approvable under the monotonicity principle.
• x ∈ P (C2≥ ) means: everyone dominating x is at least in C2 , so x is certainly not rejectable
(i.e. belongs safely to review-or-approve).
• x ∈ BnP (C3≥ ) means: x is possibly approvable but not certain, so it falls naturally into
manual review due to ambiguous dominance evidence.
Thus DRSA implements a realistic credit policy: decisions respect monotone preferences (better
profiles should not receive worse decisions), while boundary regions identify applicants requiring
additional checks.
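The dominance cones and P-lower approximation of an upward union can be sketched on a toy table (all applicant data below are invented for illustration; the identifiers are ours, not from the text):

```python
# Toy DRSA sketch: dominance cones and P-lower approximations of upward
# unions. All applicant data are invented; 1=reject, 2=review, 3=approve,
# and all criteria are benefit-type (Affordability = -DTI).

apps = {
    "x1": (80, 750, -0.2),   # (Income, CreditScore, Affordability)
    "x2": (60, 700, -0.3),
    "x3": (40, 600, -0.5),
    "x4": (90, 780, -0.1),
}
cls = {"x1": 3, "x2": 2, "x3": 1, "x4": 3}

def dominates(x, y):
    """x D_P y: x is at least as good as y on every criterion in P."""
    return all(a >= b for a, b in zip(apps[x], apps[y]))

def d_plus(x):
    """P-dominating cone: objects that dominate x."""
    return {y for y in apps if dominates(y, x)}

def lower_up(t):
    """P-lower approximation of the upward union of classes >= t."""
    return {x for x in apps
            if cls[x] >= t and all(cls[y] >= t for y in d_plus(x))}

print(lower_up(3))  # certainly approvable
print(lower_up(2))  # certainly not rejectable
```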
# Page. 57

![Page Image](https://bcdn.docswell.com/page/2EVVX21PEQ.jpg)

2.27 Triangular rough set
A triangular rough set represents vague assessments by a triple (a, b, c), yielding a piecewise-linear membership function and a direct defuzzification via the mean value [136]. Related concepts with similar structure include triangular fuzzy sets [137–139] and triangular neutrosophic sets [140, 141].
Definition 2.27.1 (Triangular rough set). [136] Let X ⊆ R be a universe of discourse (e.g., a rating scale). A triangular rough set on X is specified by a triple

A = (a, b, c) ∈ R³ with a ≤ b ≤ c and [a, c] ∩ X ≠ ∅,

together with the (triangular) membership function µA : X → [0, 1] defined by

µA(x) := 0 if x < a;
µA(x) := (x − a)/(b − a) if a ≤ x ≤ b and a < b;
µA(x) := 1 if x = b;
µA(x) := (c − x)/(c − b) if b ≤ x ≤ c and b < c;
µA(x) := 0 if x > c;

with the usual endpoint conventions in the degenerate cases: if a = b then µA(a) = 1 and the rising branch is omitted; if b = c then µA(c) = 1 and the falling branch is omitted. A common crisp representative (defuzzification) value of A is the centroid

def(A) := (a + b + c)/3.
Example 2.27.2 (Real-life example: customer satisfaction rating as a triangular rough set).
Let X = {1, 2, 3, 4, 5} ⊆ R be a 5-point customer-satisfaction (CSAT) scale. Suppose a product
manager summarizes the (vague) assessment “the satisfaction is around 4, but could be as low
as 3 and as high as 5” by the triangular rough set
A = (a, b, c) = (3, 4, 5).
Then the membership degrees on X are

µA(1) = 0,  µA(2) = 0,  µA(3) = 0,  µA(4) = 1,  µA(5) = 0.

The corresponding crisp representative (centroid) value is

def(A) = (3 + 4 + 5)/3 = 4,
so the single-number summary is 4/5 while retaining the explicit uncertainty range [3, 5] encoded
by (a, b, c).
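The membership function and centroid of this example can be sketched as follows (illustrative Python; the function and variable names are ours):

```python
# Sketch of the triangular membership of Example 2.27.2 for A = (3, 4, 5)
# on the 5-point scale (function and variable names are ours).

def mu(x, a, b, c):
    """Piecewise-linear triangular membership with degenerate-case handling."""
    if x < a or x > c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:                        # rising branch (requires a < b)
        return (x - a) / (b - a)
    return (c - x) / (c - b)         # falling branch (requires b < c)

a, b, c = 3, 4, 5
degrees = {x: mu(x, a, b, c) for x in range(1, 6)}
centroid = (a + b + c) / 3
print(degrees)   # memberships 0, 0, 0, 1, 0 on the scale 1..5
print(centroid)  # 4.0
```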
2.28 Game-theoretic rough sets
Game-theoretic rough sets treat approximation regions or probabilistic thresholds as players,
using payoff-based competition/cooperation to iteratively learn effective parameters automatically [62, 142–144]. Game-theoretic rough sets have also been widely studied in recent years,
partly due to their ease of application [145–148].
# Page. 58

![Page Image](https://bcdn.docswell.com/page/57GLVR2QEL.jpg)

Definition 2.28.1 (Decision-theoretic three-way rough approximations). [62, 142] Let U be a finite universe, R ⊆ U × U an equivalence relation, and X ⊆ U a target concept. For x ∈ U, write [x]R for the R-equivalence class (granule) of x, and set

p(x) := Prob(X | [x]R) ∈ [0, 1].

Consider three classification actions

Act = {aP, aB, aN},

interpreted as deciding positive POS(X), boundary BND(X), and negative NEG(X), respectively. Let the set of states be Ω = {X, X^c} and let λij ≥ 0 denote the loss incurred by taking action ai ∈ Act when the true state is j ∈ Ω. Define the (conditional) risk of action ai at x by

Risk(ai | x) := λiX p(x) + λiX^c (1 − p(x)).

Assume the standard ordering of losses (so that aP is best when x ∈ X, aN is best when x ∉ X, and aB is an intermediate/defer action), e.g.

λPX ≤ λBX < λNX,   λNX^c ≤ λBX^c < λPX^c.
Then comparing Risk(aP | x) vs. Risk(aB | x) and Risk(aN | x) vs. Risk(aB | x) yields thresholds

α := (λPX^c − λBX^c) / ((λPX^c − λBX^c) + (λBX − λPX)),   β := (λBX^c − λNX^c) / ((λBX^c − λNX^c) + (λNX − λBX)),

with β < α, and the three-way decision regions

POSα,β(X) = {x ∈ U : p(x) ≥ α},
NEGα,β(X) = {x ∈ U : p(x) ≤ β},
BNDα,β(X) = U \ (POSα,β(X) ∪ NEGα,β(X)).

The induced lower/upper approximations are

X̲α,β := POSα,β(X),   X̄α,β := POSα,β(X) ∪ BNDα,β(X) = U \ NEGα,β(X).
Definition 2.28.2 (Game-theoretic rough set (GTRS) model). Fix a decision-theoretic setting as in Definition 2.28.1. A game-theoretic rough set model is specified by a (normal-form) game

G = (N, {Si}i∈N, {ui}i∈N),

together with an interpretation map that turns each strategy profile into a three-way approximation of X.

(i) Players. N is a finite set of players. Typical choices are:

• Parameter game: N = {α, β} (players represent the probabilistic thresholds);
• Measure game: N = {Acc, Prec} (players represent chosen approximation measures).
# Page. 59

![Page Image](https://bcdn.docswell.com/page/4EQY6VPWJP.jpg)

(ii) Strategies and induced approximations. For each i ∈ N, Si is a finite set of strategies (actions). Each profile s = (si)i∈N ∈ ∏_{i∈N} Si determines an updated loss table (equivalently, updated risks), hence updated thresholds (αs, βs) and thus

(POSs(X), BNDs(X), NEGs(X)) := (POSαs,βs(X), BNDαs,βs(X), NEGαs,βs(X)).

(Concretely, a strategy typically corresponds to increasing/decreasing selected losses or directly increasing/decreasing α or β by a small step, then recomputing the regions.)

(iii) Payoffs. Each payoff function

ui : ∏_{j∈N} Sj → R

quantifies the utility of the resulting approximation for player i. A generic and mathematically clean choice is

ui(s) := mi(POSs(X), BNDs(X), NEGs(X)) − mi(POSs^(0)(X), BNDs^(0)(X), NEGs^(0)(X)),

where mi is the player's objective measure (e.g., boundary-size reduction, accuracy improvement, precision gain), and s^(0) is a fixed baseline profile (e.g., the current system configuration).
(iv) Equilibrium and the GTRS approximation. A profile s∗ ∈ ∏_{i∈N} Si is a (pure) Nash equilibrium if

ui(s∗) ≥ ui(si, s∗−i)   (∀ i ∈ N, ∀ si ∈ Si),

where s∗−i denotes the strategies of all players except i. The game-theoretic rough approximation of X (under G) is the three-way approximation induced by an equilibrium profile s∗, namely

(X̲GTRS, X̄GTRS, BNDGTRS(X)) := (POSs∗(X), U \ NEGs∗(X), BNDs∗(X)).
Example 2.28.3 (Real-life GTRS: tuning an e-mail spam filter via a precision–recall game).
Let U be a finite set of e-mails arriving in one day. Let X ⊆ U be the (unknown) set of truly
spam e-mails. Assume that a baseline classifier assigns each x ∈ U a spam score p(x) ∈ [0, 1]
(estimated probability that x ∈ X ), and that actions are the three-way decisions
ActP = “auto-block”,
ActB = “quarantine / human review”,
ActN = “deliver”.
Given thresholds (α, β) with 0 ≤ β < α ≤ 1, the induced three-way regions are

POSα,β(X) = {x ∈ U : p(x) ≥ α},   NEGα,β(X) = {x ∈ U : p(x) ≤ β},   BNDα,β(X) = U \ (POSα,β(X) ∪ NEGα,β(X)).
Players (a measure game). Let N = {Prec, Rec}, where Prec represents the product team
that wants to minimize false positives (maximize precision), and Rec represents the security team
that wants to catch as much spam as possible (maximize recall).
# Page. 60

![Page Image](https://bcdn.docswell.com/page/KJ4W4MY171.jpg)

Strategies. Fix small step sizes δα, δβ > 0. Let each player choose a discrete adjustment:

SPrec = {↑α, ↓α},   SRec = {↑β, ↓β},

where, for a profile s = (sPrec, sRec), the updated thresholds are

αs = α0 + ∆α(sPrec),   βs = β0 + ∆β(sRec),

with ∆α(↑α) = +δα, ∆α(↓α) = −δα, ∆β(↑β) = +δβ, and ∆β(↓β) = −δβ, projected to [0, 1] and maintaining βs < αs.
Payoffs (data-driven). Using a labeled validation subset Uval ⊆ U (obtained from user reports and audits), define the realized precision and recall at (αs, βs) by

Prec(s) := |POSs(X) ∩ X| / (|POSs(X)| + ε),   Rec(s) := |POSs(X) ∩ X| / (|X| + ε),

with a tiny ε > 0 to avoid division by zero. Let the payoffs be improvements over a baseline profile s^(0):

uPrec(s) := Prec(s) − Prec(s^(0)),   uRec(s) := Rec(s) − Rec(s^(0)).
Then G = (N, {Si}i∈N, {ui}i∈N) together with the interpretation s ↦ (POSs(X), BNDs(X), NEGs(X)) is a concrete GTRS instance.
GTRS outcome. A (pure) Nash equilibrium s∗ is a stable operating point where neither team can improve its objective by unilaterally changing the threshold it controls. The resulting game-theoretic rough approximation is

X̲GTRS = POSs∗(X),   X̄GTRS = U \ NEGs∗(X),   BNDGTRS(X) = BNDs∗(X),
interpreted as auto-block spam, possible spam, and safe e-mails, respectively.
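A minimal computational sketch of such a game follows. All e-mail scores, thresholds, and step sizes below are invented, and one deliberate variant is flagged: the recall player is credited for quarantined mail too (i.e. over POS ∪ BND), so that its threshold β actually influences its payoff.

```python
# Toy GTRS sketch for the precision-recall game. All e-mail scores
# (integers 0..100), thresholds, and step sizes are invented. Variant,
# flagged: the recall player is credited for quarantined mail too
# (POS ∪ BND = {p > beta}), so that beta affects its payoff.

emails = [(95, 1), (90, 0), (85, 1), (60, 1), (30, 0), (28, 1), (10, 0)]
a0, b0, da, db = 90, 25, 5, 5          # baseline thresholds and steps
SPAM = sum(y for _, y in emails)

def precision(alpha):                   # over POS = {p >= alpha}
    pos = [y for p, y in emails if p >= alpha]
    return sum(pos) / len(pos) if pos else 0.0

def recall(beta):                       # over POS ∪ BND = {p > beta}
    return sum(y for p, y in emails if p > beta) / SPAM

def payoff(s):                          # improvement over the baseline
    return (precision(a0 + s[0]) - precision(a0),
            recall(b0 + s[1]) - recall(b0))

profiles = [(sa, sb) for sa in (+da, -da) for sb in (+db, -db)]
nash = [s for s in profiles
        if payoff(s)[0] >= max(payoff((x, s[1]))[0] for x in (+da, -da))
        and payoff(s)[1] >= max(payoff((s[0], y))[1] for y in (+db, -db))]
print(nash)  # the pure equilibrium raises alpha and lowers beta
```

On this toy data the unique pure equilibrium raises α (better precision) and lowers β (better recall over quarantined mail), matching the intended stable operating point.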
2.29 Variable precision rough set
VPRS generalizes rough sets by allowing a controlled misclassification rate β, defining flexible lower and upper approximations of concepts in data sets [149–153]. Variable precision rough sets, like other rough-set models, have also been extensively studied [154–157].
Definition 2.29.1 (Misclassification rate and β -inclusion). Let U be a nonempty finite universe
and let R ⊆ U × U be an equivalence relation. For x ∈ U , write
[x]R := { y ∈ U | (x, y) ∈ R }
for the R-equivalence class of x. For nonempty A ⊆ U and any X ⊆ U, define the relative misclassification rate of A with respect to X by

err(A, X) := |A \ X| / |A| = 1 − |A ∩ X| / |A|.

Fix a precision parameter β ∈ [0, 1/2). We say that A is β-included in X, and write A ⊆β X, if

err(A, X) ≤ β ⟺ |A ∩ X| / |A| ≥ 1 − β.
# Page. 61

![Page Image](https://bcdn.docswell.com/page/LE1Y48657G.jpg)

Definition 2.29.2 (Variable precision rough approximations (VPRS)). Let (U, R) be as in Definition 2.29.1, let X ⊆ U, and fix β ∈ [0, 1/2). The β-lower approximation and β-upper approximation of X are defined by

apr̲β(X) := {x ∈ U | [x]R ⊆β X} = {x ∈ U : |[x]R ∩ X| / |[x]R| ≥ 1 − β},

apr̄β(X) := U \ apr̲β(U \ X) = {x ∈ U | [x]R ⊈β (U \ X)}.

The induced positive, negative, and boundary regions are

POSβ(X) := apr̲β(X),   NEGβ(X) := U \ apr̄β(X) = apr̲β(U \ X),   BNDβ(X) := apr̄β(X) \ apr̲β(X).
Remark 2.29.3. If β = 0, then A ⊆0 X is equivalent to A ⊆ X, hence apr̲0(X) and apr̄0(X) reduce to the classical Pawlak lower and upper approximations based on R.
Example 2.29.4 (Variable precision rough approximations in credit screening). Let U =
{a1 , a2 , . . . , a10 } be a set of loan applicants. Assume applicants are indiscernible (for a coarse
first-stage screening) if they share the same income bracket and employment type. This induces
an equivalence relation R on U with equivalence classes
[a1 ]R = · · · = [a5 ]R =: C1 = {a1 , a2 , a3 , a4 , a5 },
[a6 ]R = [a7 ]R = [a8 ]R =: C2 = {a6 , a7 , a8 },
[a9 ]R = [a10 ]R =: C3 = {a9 , a10 }.
Let X ⊆ U be the set of applicants judged low-risk by a more accurate (but costly) manual
review:
X = {a1 , a2 , a3 , a4 , a9 }.
Fix the VPRS tolerance parameter β = 0.2 (so 1 − β = 0.8).

Step 1: β-lower approximation. For x ∈ U, the condition [x]R ⊆0.2 X is equivalent to |[x]R ∩ X| / |[x]R| ≥ 0.8. Compute the classwise ratios:

|C1 ∩ X| / |C1| = 4/5 = 0.8,   |C2 ∩ X| / |C2| = 0/3 = 0,   |C3 ∩ X| / |C3| = 1/2 = 0.5.

Hence only C1 satisfies the threshold, and thus

apr̲0.2(X) = {x ∈ U | [x]R ⊆0.2 X} = C1 = {a1, a2, a3, a4, a5}.
# Page. 62

![Page Image](https://bcdn.docswell.com/page/GEWGXZWWJ2.jpg)

Step 2: β-upper approximation via the complement. The complement is

U \ X = {a5, a6, a7, a8, a10}.

Now

|C1 ∩ (U \ X)| / |C1| = 1/5 = 0.2,   |C2 ∩ (U \ X)| / |C2| = 3/3 = 1,   |C3 ∩ (U \ X)| / |C3| = 1/2 = 0.5.

Thus

apr̲0.2(U \ X) = C2 = {a6, a7, a8},

and by Definition 2.29.2,

apr̄0.2(X) = U \ apr̲0.2(U \ X) = U \ C2 = {a1, a2, a3, a4, a5, a9, a10}.

Regions. Therefore,

POS0.2(X) = apr̲0.2(X) = {a1, a2, a3, a4, a5},
NEG0.2(X) = apr̲0.2(U \ X) = {a6, a7, a8},
BND0.2(X) = apr̄0.2(X) \ apr̲0.2(X) = {a9, a10}.
Interpretation. With β = 0.2, the class C1 is treated as "definitely low-risk" even though a5 ∉ X (allowing up to 20% misclassification inside a granule), while C2 is "definitely not low-risk". The mixed class C3 becomes the boundary (uncertain) region.
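The example can be checked mechanically (illustrative Python; the complement trick mirrors Definition 2.29.2, and the identifiers are ours):

```python
# Sketch reproducing Example 2.29.4: beta-lower/upper approximations
# from the partition {C1, C2, C3} with beta = 0.2 (names are ours).

U = {f"a{i}" for i in range(1, 11)}
classes = [{"a1", "a2", "a3", "a4", "a5"},      # C1
           {"a6", "a7", "a8"},                  # C2
           {"a9", "a10"}]                       # C3
X = {"a1", "a2", "a3", "a4", "a9"}
beta = 0.2

def lower(target):
    """Union of classes beta-included in target: |C ∩ T| / |C| >= 1 - beta."""
    blocks = [C for C in classes if len(C & target) / len(C) >= 1 - beta]
    return set().union(*blocks) if blocks else set()

apr_low = lower(X)           # POS region
apr_up = U - lower(U - X)    # complement trick from Definition 2.29.2
print(sorted(apr_low), sorted(apr_up - apr_low))  # POS and BND
```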
2.30 Multi-granulation rough set
Multi-granulation rough sets approximate a concept using multiple equivalence relations, thereby
yielding optimistic or pessimistic lower/upper regions across granules concurrently [158–162].
Multi-granulation rough sets, like other rough-set models, have also attracted a substantial body
of research in recent years [163–165].
Definition 2.30.1 (Multi-granulation rough approximations). Let U be a nonempty finite universe and let
R = {R1 , R2 , . . . , Rm }
be a finite family of equivalence relations on U (each Ri ⊆ U ×U ). For x ∈ U and i ∈ {1, . . . , m},
write
[x]Ri := { y ∈ U | (x, y) ∈ Ri }
for the Ri -equivalence class (granule) of x.
For any X ⊆ U, the optimistic multi-granulation lower and upper approximations of X (with respect to R) are defined by

apr̲R^O(X) := {x ∈ U | [x]R1 ⊆ X ∨ [x]R2 ⊆ X ∨ · · · ∨ [x]Rm ⊆ X},
# Page. 63

![Page Image](https://bcdn.docswell.com/page/47ZL6152J3.jpg)

apr̄R^O(X) := U \ apr̲R^O(U \ X) = {x ∈ U | [x]R1 ∩ X ≠ ∅ ∧ · · · ∧ [x]Rm ∩ X ≠ ∅}.

The optimistic multi-granulation rough set of X is the pair

(apr̲R^O(X), apr̄R^O(X)).
Similarly, the pessimistic multi-granulation lower and upper approximations of X are

apr̲R^P(X) := {x ∈ U | [x]R1 ⊆ X ∧ [x]R2 ⊆ X ∧ · · · ∧ [x]Rm ⊆ X},

apr̄R^P(X) := U \ apr̲R^P(U \ X) = {x ∈ U | [x]R1 ∩ X ≠ ∅ ∨ · · · ∨ [x]Rm ∩ X ≠ ∅}.

The pessimistic multi-granulation rough set of X is the pair

(apr̲R^P(X), apr̄R^P(X)).
Remark 2.30.2. In both the optimistic and pessimistic cases one has

apr̲R^∗(X) ⊆ X ⊆ apr̄R^∗(X)   (∗ ∈ {O, P}),
so each pair forms a valid rough approximation of X . When m = 1, both models reduce to
Pawlak’s classical approximations for the single relation R1 .
Example 2.30.3 (Multi-granulation rough approximations in medical triage). Let
U = {p1 , p2 , p3 , p4 , p5 , p6 , p7 , p8 }
be a set of patients arriving at an emergency clinic. We model two different (coarse) ways of
grouping patients, hence two equivalence relations:
(i) Symptom-profile granulation. Let R1 be indiscernibility with respect to a symptom profile (e.g., fever/cough category), giving the partition

U/R1 = {C1, C2, C3},   C1 = {p1, p2, p3},  C2 = {p4, p5},  C3 = {p6, p7, p8}.

(ii) Rapid-test granulation. Let R2 be indiscernibility with respect to a binary rapid test result (positive/negative), giving the partition

U/R2 = {D1, D2},   D1 = {p1, p2, p4, p6} (test+),  D2 = {p3, p5, p7, p8} (test–).
Set R = {R1 , R2 }. Let the target concept be
X = {p1 , p2 , p4 , p5 } ⊆ U,
# Page. 64

![Page Image](https://bcdn.docswell.com/page/YJ6W2LV6JV.jpg)

interpreted as “patients judged high-risk (need isolation)” by a senior clinician.
Optimistic lower approximation. By Definition 2.30.1,

apr̲R^O(X) = {x ∈ U | [x]R1 ⊆ X ∨ [x]R2 ⊆ X}.

Now C2 = {p4, p5} ⊆ X, while C1 ⊈ X, C3 ⊈ X, and neither D1 nor D2 is contained in X. Hence only the patients in C2 enter the optimistic lower approximation:

apr̲R^O(X) = {p4, p5}.

Optimistic upper approximation. Using the equivalent characterization in Definition 2.30.1,

apr̄R^O(X) = {x ∈ U | [x]R1 ∩ X ≠ ∅ ∧ [x]R2 ∩ X ≠ ∅}.

For x ∈ C1, we have C1 ∩ X = {p1, p2} ≠ ∅, and both D1 ∩ X = {p1, p2, p4} ≠ ∅ and D2 ∩ X = {p5} ≠ ∅ (depending on whether x ∈ D1 or x ∈ D2), so p1, p2, p3 ∈ apr̄R^O(X). Similarly p4, p5 ∈ apr̄R^O(X), while C3 ∩ X = ∅, so p6, p7, p8 ∉ apr̄R^O(X). Therefore,

apr̄R^O(X) = {p1, p2, p3, p4, p5}.
Pessimistic lower approximation. By definition,

apr̲R^P(X) = {x ∈ U | [x]R1 ⊆ X ∧ [x]R2 ⊆ X}.

Although [p4]R1 = [p5]R1 = C2 ⊆ X, we have [p4]R2 = D1 ⊈ X and [p5]R2 = D2 ⊈ X. Thus no element satisfies both inclusions, and

apr̲R^P(X) = ∅.

Pessimistic upper approximation. Equivalently,

apr̄R^P(X) = {x ∈ U | [x]R1 ∩ X ≠ ∅ ∨ [x]R2 ∩ X ≠ ∅}.

Every patient belongs to either a symptom class intersecting X (namely C1 or C2) or a test class intersecting X (both D1 and D2 intersect X), so

apr̄R^P(X) = U.
Interpretation. The optimistic model certifies "high-risk" if either the symptom granulation or the test granulation supports it (hence {p4, p5}), whereas the pessimistic model requires both granular views to support it (hence the empty lower set here).
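An illustrative sketch reproducing these four approximations (the set representation and function names are ours, not from the text):

```python
# Sketch reproducing Example 2.30.3: optimistic vs. pessimistic
# multi-granulation approximations from the two partitions (names ours).

U = {f"p{i}" for i in range(1, 9)}
part1 = [{"p1", "p2", "p3"}, {"p4", "p5"}, {"p6", "p7", "p8"}]  # U/R1
part2 = [{"p1", "p2", "p4", "p6"}, {"p3", "p5", "p7", "p8"}]    # U/R2
parts = [part1, part2]
X = {"p1", "p2", "p4", "p5"}

def granule(x, partition):
    """Block of the partition containing x."""
    return next(block for block in partition if x in block)

def mg_lower(target, parts, optimistic=True):
    """Optimistic: some granule of x fits; pessimistic: every granule fits."""
    quantifier = any if optimistic else all
    return {x for x in U
            if quantifier(granule(x, P) <= target for P in parts)}

def mg_upper(target, parts, optimistic=True):
    """Upper approximation via the complement, as in Definition 2.30.1."""
    return U - mg_lower(U - target, parts, optimistic)

print(mg_lower(X, parts, True), mg_upper(X, parts, True))    # optimistic
print(mg_lower(X, parts, False), mg_upper(X, parts, False))  # pessimistic
```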
# Page. 65

![Page Image](https://bcdn.docswell.com/page/GJ5M21D5J4.jpg)

2.31 Soft Rough Set
A soft rough set combines soft-set parameterization with rough approximations, producing lower
and upper regions of a target concept under parameter-dependent uncertainty [166–169]. Related
notions include fuzzy soft rough sets [170–172], HyperSoft Rough Set [173, 174], Modified soft
rough set [175–177], N-soft rough sets [178, 179], and neutrosophic soft rough sets [180–182].
Definition 2.31.1 (Soft Rough Set). (cf. [183, 184]) Let U be a universal set, A a set of parameters, and P(U) the power set of U. Let R be an equivalence relation on U, inducing a partition U/R = {Y1, Y2, . . . , Ym} into equivalence classes. A soft set (F, A) on U is defined as a mapping F : A → P(U).

For B ⊆ U, the Soft Rough Lower Approximation L(B) and Soft Rough Upper Approximation U(B) are given by:

L(B) = {u ∈ U | ∃e ∈ A such that u ∈ F(e) and F(e) ⊆ B},
U(B) = {u ∈ U | ∃e ∈ A such that u ∈ F(e) and F(e) ∩ B ≠ ∅}.

The Soft Rough Set is represented as the pair (L(B), U(B)), where L(B) and U(B) are the approximations of B with respect to the soft set.
Example 2.31.2 (Soft rough set for apartment shortlisting). Let U = {u1 , u2 , u3 , u4 , u5 , u6 }
be six apartments. Assume an equivalence relation R on U given by same neighborhood, so the
partition U /R = {Y1 , Y2 , Y3 } is
Y1 = {u1 , u2 },
Y2 = {u3 , u4 },
Y3 = {u5 , u6 }.
Let the parameter set be
A = {e1 , e2 , e3 },
where e1 = “affordable (monthly rent ≤ 120,000 JPY)”, e2 = “close to a station (walk ≤ 10
minutes)”, e3 = “quiet (low traffic / low nightlife)”. Define a soft set (F, A) on U by
F (e1 ) = {u1 , u3 , u5 },
F (e2 ) = {u1 , u2 , u4 },
F (e3 ) = {u2 , u5 , u6 }.
Suppose the target concept is
B = {u1 , u2 , u4 } ⊆ U,
interpreted as “apartments I would accept after a quick screening.”
The soft rough lower approximation is
L(B) = {u ∈ U | ∃e ∈ A such that u ∈ F(e) ⊆ B}.


# Page. 66

![Page Image](https://bcdn.docswell.com/page/LE3WK1M5E5.jpg)

Here,
F(e1) = {u1, u3, u5} ⊈ B,
F(e2) = {u1, u2, u4} ⊆ B,
F(e3) = {u2, u5, u6} ⊈ B,
so the witness parameter is e2, and hence
L(B) = F(e2) = {u1, u2, u4}.
Similarly, the soft rough upper approximation is
U(B) = {u ∈ U | ∃e ∈ A such that u ∈ F(e) and F(e) ∩ B ≠ ∅}.
Since
F(e1) ∩ B = {u1} ≠ ∅,
F(e2) ∩ B = {u1, u2, u4} ≠ ∅,
F(e3) ∩ B = {u2} ≠ ∅,
every apartment that appears in at least one of F (e1 ), F (e2 ), F (e3 ) belongs to U (B), hence
U (B) = F (e1 ) ∪ F (e2 ) ∪ F (e3 ) = {u1 , u2 , u3 , u4 , u5 , u6 } = U.
Therefore the soft rough set (soft rough description) of B induced by (F, A) is
(L(B), U(B)) = ({u1, u2, u4}, U),
meaning that u1 , u2 , u4 are definitely acceptable under the parameterized evidence, while the
remaining apartments are only possibly acceptable due to partial parameter overlap.
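The screening in Example 2.31.2 can be checked mechanically. Below is a minimal Python sketch (the helper name `soft_rough` is our own illustration, not notation from the cited sources):

```python
def soft_rough(F, B):
    """Return (lower, upper) soft rough approximations of B under soft set F."""
    lower, upper = set(), set()
    for granule in F.values():
        if granule <= B:
            lower |= granule   # granule certainly inside B: certain members
        if granule & B:
            upper |= granule   # granule touches B: possible members
    return lower, upper

# Apartment-shortlisting data from Example 2.31.2
F = {"e1": {"u1", "u3", "u5"},   # affordable
     "e2": {"u1", "u2", "u4"},   # close to a station
     "e3": {"u2", "u5", "u6"}}   # quiet
B = {"u1", "u2", "u4"}

lower, upper = soft_rough(F, B)
print(sorted(lower))  # ['u1', 'u2', 'u4']
print(sorted(upper))  # ['u1', 'u2', 'u3', 'u4', 'u5', 'u6']
```

The printed pair reproduces (L(B), U(B)) = ({u1, u2, u4}, U) from the example.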
2.32 Soft Rough Expert Set
A soft expert set assigns to each parameter–expert–opinion triple a subset of the universe, modeling expert-dependent uncertainty in decision making [185–188]. A soft rough expert set combines soft parameters and expert opinions with rough approximations, yielding lower and upper evaluations under uncertainty in decision making [189].
Definition 2.32.1 (Soft Rough Expert Set). [189] Let U be a nonempty universe, let E be a set of parameters, let X be a set of experts, and let O = {0, 1} be a set of opinions. Set the soft expert parameter space as
Z := E × X × O,    B ⊆ Z.
A soft expert set over U is a pair R = (J, B), where
J : B → P(U).
(We call R a soft expert approximation space.)
For any target subset Y ⊆ U, define the soft rough expert lower and upper approximations induced by R by
apr̲_R(Y) := {u ∈ U | ∃b ∈ B s.t. u ∈ J(b) and J(b) ⊆ Y} = ⋃{J(b) | b ∈ B, J(b) ⊆ Y},
apr̄_R(Y) := {u ∈ U | ∃b ∈ B s.t. u ∈ J(b) and J(b) ∩ Y ≠ ∅} = ⋃{J(b) | b ∈ B, J(b) ∩ Y ≠ ∅}.
Then the ordered pair
(apr̲_R(Y), apr̄_R(Y))
is called the Soft Rough Expert Set of Y (with respect to R).


# Page. 67

![Page Image](https://bcdn.docswell.com/page/8EDK3X6Y7G.jpg)

Example 2.32.2 (Real-life Soft Rough Expert Set: shortlisting smartphones from expert opinions). Let the universe of objects be four smartphone models
U = {p1 , p2 , p3 , p4 }.
Let the parameter set and expert set be
E = {Battery, Camera, Price},
X = {Alice, Bob},
O = {0, 1}.
Interpret o = 1 as a positive expert opinion (recommended under that parameter) and o = 0 as
a negative opinion.
Define the soft expert parameter space Z = E × X × O and choose the active parameter set
B = {(Battery, Alice, 1), (Camera, Bob, 1), (Price, Alice, 1), (Battery, Bob, 0)} ⊆ Z.
Define a soft expert set R = (J, B) by specifying J : B → P(U) as follows:
J(Battery, Alice, 1) = {p1, p2}   (Alice recommends p1, p2 for battery),
J(Camera, Bob, 1) = {p2, p3}      (Bob recommends p2, p3 for camera),
J(Price, Alice, 1) = {p1, p3}     (Alice recommends p1, p3 for price),
J(Battery, Bob, 0) = {p3}         (Bob flags p3 as bad on battery).
Let the target set be the “travel shortlist”
Y = {p2 , p3 } ⊆ U.
Then the soft rough expert lower approximation is
apr̲_R(Y) = ⋃{J(b) | b ∈ B, J(b) ⊆ Y} = J(Camera, Bob, 1) ∪ J(Battery, Bob, 0) = {p2, p3}.
The soft rough expert upper approximation is
apr̄_R(Y) = ⋃{J(b) | b ∈ B, J(b) ∩ Y ≠ ∅} = {p1, p2} ∪ {p2, p3} ∪ {p1, p3} ∪ {p3} = {p1, p2, p3}.
Hence the Soft Rough Expert Set of Y (with respect to R) is
(apr̲_R(Y), apr̄_R(Y)) = ({p2, p3}, {p1, p2, p3}),
where p2, p3 are definitely shortlisted under expert evidence, while p1 is only possibly shortlisted due to overlapping positive recommendations.
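The same computation for Example 2.32.2 can be sketched in a few lines (the function name `soft_rough_expert` is ours):

```python
def soft_rough_expert(J, Y):
    """Lower/upper soft rough expert approximations as unions of granules J(b)."""
    lower = set().union(*[g for g in J.values() if g <= Y] or [set()])
    upper = set().union(*[g for g in J.values() if g & Y] or [set()])
    return lower, upper

# Smartphone data from Example 2.32.2; keys are (parameter, expert, opinion)
J = {("Battery", "Alice", 1): {"p1", "p2"},
     ("Camera",  "Bob",   1): {"p2", "p3"},
     ("Price",   "Alice", 1): {"p1", "p3"},
     ("Battery", "Bob",   0): {"p3"}}
Y = {"p2", "p3"}

lower, upper = soft_rough_expert(J, Y)
print(sorted(lower))  # ['p2', 'p3']
print(sorted(upper))  # ['p1', 'p2', 'p3']
```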
2.33 Covering-based Rough Sets
Covering-based rough sets generalize Pawlak’s model by replacing partitions with coverings,
yielding neighborhood-based lower and upper approximations for uncertain data [190–194].
Definition 2.33.1 (Covering approximation space). Let U be a nonempty universe. A family C ⊆ P(U) is called a covering of U if ∅ ∉ C and ⋃C = U. The pair (U, C) is called a covering approximation space.


# Page. 68

![Page Image](https://bcdn.docswell.com/page/V7PK4PY2J8.jpg)

Definition 2.33.2 (Dual covering approximation operators: tight and loose). Let (U, C) be a covering approximation space and let X ⊆ U.
(1) Tight (strong) pair. Define the tight lower and tight upper approximations of X by
apr̲ᵗ_C(X) := ⋃{K ∈ C | K ⊆ X} = {x ∈ U | ∃K ∈ C (x ∈ K ⊆ X)},
apr̄ᵗ_C(X) := U \ apr̲ᵗ_C(U \ X) = {x ∈ U | ∀K ∈ C (x ∈ K ⇒ K ∩ X ≠ ∅)}.
(2) Loose (weak) pair. Define the loose lower and loose upper approximations of X by
apr̲ˡ_C(X) := U \ apr̄ˡ_C(U \ X) = {x ∈ U | ∀K ∈ C (x ∈ K ⇒ K ⊆ X)},
apr̄ˡ_C(X) := ⋃{K ∈ C | K ∩ X ≠ ∅} = {x ∈ U | ∃K ∈ C (x ∈ K, K ∩ X ≠ ∅)}.
Both pairs are dual in the sense that each upper operator is the complement of the corresponding lower operator applied to complements.
Definition 2.33.3 (Covering-based rough set). Let (U, C) be a covering approximation space and X ⊆ U. A covering-based rough set (with respect to a chosen dual pair, e.g. the tight pair or the loose pair) is represented by the approximation pair
(apr̲(X), apr̄(X)),
where (apr̲, apr̄) is either (apr̲ᵗ_C, apr̄ᵗ_C) or (apr̲ˡ_C, apr̄ˡ_C). The set X is (covering-)definable if apr̲(X) = apr̄(X); otherwise X is rough under the selected covering-based approximation.
Example 2.33.4 (Covering-based rough set for customer churn risk). Consider an e-commerce
service that groups customers into overlapping behavioral segments. Let
U = {c1 , c2 , c3 , c4 , c5 , c6 }
be the set of customers, and let the covering C consist of the following segments:
K1 = {c1 , c2 , c3 } (new users),
K2 = {c3 , c4 } (coupon users),
K3 = {c4 , c5 } (support-contacted),
K4 = {c5 } (payment-failure flagged),
K5 = {c4 , c6 } (inactive-week users),
so C = {K1 , K2 , K3 , K4 , K5 } is a covering of U .
Let the target set be the (unknown-but-assessed) set of customers who are truly at high churn
risk:
X = {c4 , c5 } ⊆ U.
Tight (strong) approximations. Since K3 ⊆ X and K4 ⊆ X, we obtain
apr̲ᵗ_C(X) = ⋃{K ∈ C | K ⊆ X} = K3 ∪ K4 = {c4, c5}.


# Page. 69

![Page Image](https://bcdn.docswell.com/page/2JVVX2GXJQ.jpg)

Moreover, c6 ∈ K5 and K5 ∩ X = {c4} ≠ ∅, while every covering block containing c4 or c5 intersects X, hence
apr̄ᵗ_C(X) = {c4, c5, c6}.
Thus the tight boundary and negative regions are
BNDᵗ_C(X) = apr̄ᵗ_C(X) \ apr̲ᵗ_C(X) = {c6},
U \ apr̄ᵗ_C(X) = {c1, c2, c3}.
Loose (weak) approximations. The blocks that intersect X are K2, K3, K4, K5, so
apr̄ˡ_C(X) = ⋃{K ∈ C | K ∩ X ≠ ∅} = K2 ∪ K3 ∪ K4 ∪ K5 = {c3, c4, c5, c6}.
For the loose lower approximation, we require that every covering block containing the customer is contained in X. Customer c5 belongs only to K3 and K4, and both satisfy K3 ⊆ X and K4 ⊆ X, hence
apr̲ˡ_C(X) = {c5}.
Therefore,
BNDˡ_C(X) = apr̄ˡ_C(X) \ apr̲ˡ_C(X) = {c3, c4, c6},
U \ apr̄ˡ_C(X) = {c1, c2}.
Interpretation. Under segment information C , c5 is certainly high-risk even in the loose sense,
while c6 is only possibly high-risk because it co-occurs with a risky customer in an overlapping
segment.
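As a sanity check on Example 2.33.4, both dual pairs from Definition 2.33.2 can be computed directly from the blocks (function names are ours):

```python
def tight_pair(U, C, X):
    """Tight lower: union of blocks inside X; tight upper: dual via all-blocks test."""
    lower = set().union(*[K for K in C if K <= X] or [set()])
    upper = {x for x in U if all(K & X for K in C if x in K)}
    return lower, upper

def loose_pair(U, C, X):
    """Loose lower: every block through x lies in X; loose upper: blocks meeting X."""
    lower = {x for x in U if all(K <= X for K in C if x in K)}
    upper = set().union(*[K for K in C if K & X] or [set()])
    return lower, upper

U = {"c1", "c2", "c3", "c4", "c5", "c6"}
C = [{"c1", "c2", "c3"}, {"c3", "c4"}, {"c4", "c5"}, {"c5"}, {"c4", "c6"}]
X = {"c4", "c5"}

t_lo, t_up = tight_pair(U, C, X)
l_lo, l_up = loose_pair(U, C, X)
print(sorted(t_lo), sorted(t_up))  # ['c4', 'c5'] ['c4', 'c5', 'c6']
print(sorted(l_lo), sorted(l_up))  # ['c5'] ['c3', 'c4', 'c5', 'c6']
```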
2.34 Local Rough Set
A local rough set approximates a target concept efficiently by using only the granules generated by objects inside the concept, controlled by α/β inclusion thresholds [195–198].
Definition 2.34.1 (Local rough set (LRS)). [195, 196] Let (U, R) be an approximation space, where U is a nonempty finite universe and R is an equivalence relation on U. For each x ∈ U, write
[x]R := {y ∈ U | (x, y) ∈ R}
for the R-equivalence class (information granule) of x. Let
D : P(U) × P(U) → [0, 1]
be an inclusion degree (a conditional/inclusion measure), e.g.
D(A, B) := |A ∩ B| / |B|   (B ≠ ∅).
Fix thresholds 0 ≤ β < α ≤ 1. For any target concept X ⊆ U, define the local α-lower and local β-upper approximations by
apr̲ᵅ_LRS(X) := {x ∈ X | D(X, [x]R) ≥ α},
apr̄ᵝ_LRS(X) := {x ∈ U | D(X, [x]R) > β} = ⋃{[x]R | x ∈ X, D(X, [x]R) > β}.


# Page. 70

![Page Image](https://bcdn.docswell.com/page/5EGLVR8RJL.jpg)

The ordered pair
(apr̲ᵅ_LRS(X), apr̄ᵝ_LRS(X))
is called the (α, β)-local rough set of X (with respect to R and D), and its local boundary region is
Bn^{LRS}_{α,β}(X) := apr̄ᵝ_LRS(X) \ apr̲ᵅ_LRS(X).
Example 2.34.2 (Local rough set for credit-risk screening). Consider a bank that groups applicants into indiscernibility classes using a coarse credit profile (e.g., the same credit-score band
and income band). Let
U = {u1 , u2 , u3 , u4 , u5 , u6 , u7 , u8 }
be eight recent applicants. Define an equivalence relation R on U by declaring two applicants
equivalent if they fall into the same coarse profile class. Suppose the R-classes are
[u1 ]R = [u2 ]R = [u3 ]R = [u4 ]R = {u1 , u2 , u3 , u4 },
[u5 ]R = [u6 ]R = {u5 , u6 },
[u7 ]R = [u8 ]R = {u7 , u8 }.
Let the target concept X ⊆ U be the set of applicants who defaulted within 12 months in
historical data:
X = {u1 , u2 , u3 , u5 }.
Use the inclusion degree
D(A, B) := |A ∩ B| / |B|   (B ≠ ∅),
and choose thresholds α = 0.75 and β = 0.25.
Step 1: local inclusion degrees. Compute D(X, [x]R) = |X ∩ [x]R| / |[x]R| for each class:
D(X, {u1, u2, u3, u4}) = 3/4 = 0.75,
D(X, {u5, u6}) = 1/2 = 0.50,
D(X, {u7, u8}) = 0/2 = 0.
Step 2: LRS approximations. The local α-lower approximation keeps only defaulted applicants whose whole profile class is α-dominated by defaults:
apr̲ᵅ_LRS(X) = {x ∈ X | D(X, [x]R) ≥ 0.75} = {u1, u2, u3}.
The local β-upper approximation includes any applicant whose profile class has default rate > β:
apr̄ᵝ_LRS(X) = {x ∈ U | D(X, [x]R) > 0.25} = {u1, u2, u3, u4, u5, u6}.
Hence the local boundary region (ambiguous-risk region) is
Bn^{LRS}_{α,β}(X) = apr̄ᵝ_LRS(X) \ apr̲ᵅ_LRS(X) = {u4, u5, u6}.
Interpretation. Applicants u1 , u2 , u3 are certainly high-risk under this granulation (their class
default-rate is 0.75), while u4 , u5 , u6 are possibly high-risk (their classes exceed β ) but not certain
under α, so they belong to the local boundary and may be sent to manual review.
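The two-step calculation in Example 2.34.2 can be sketched as follows (helper name `local_rough` is ours):

```python
def local_rough(classes, X, alpha, beta):
    """classes: dict mapping each object to its equivalence class (a frozenset).
    Returns the (alpha, beta)-local lower and upper approximations."""
    D = {x: len(X & cls) / len(cls) for x, cls in classes.items()}
    lower = {x for x in X if D[x] >= alpha}          # only objects inside X
    upper = {x for x in classes if D[x] > beta}      # any object in a class above beta
    return lower, upper

# Credit-risk data from Example 2.34.2
blocks = [frozenset({"u1", "u2", "u3", "u4"}),
          frozenset({"u5", "u6"}),
          frozenset({"u7", "u8"})]
classes = {x: blk for blk in blocks for x in blk}
X = {"u1", "u2", "u3", "u5"}

lower, upper = local_rough(classes, X, alpha=0.75, beta=0.25)
print(sorted(lower))          # ['u1', 'u2', 'u3']
print(sorted(upper))          # ['u1', 'u2', 'u3', 'u4', 'u5', 'u6']
print(sorted(upper - lower))  # boundary: ['u4', 'u5', 'u6']
```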


# Page. 71

![Page Image](https://bcdn.docswell.com/page/4JQY6V9Y7P.jpg)

2.35 Interval-valued Rough Set
An interval-valued rough set assigns each element an interval [l(x), u(x)] representing possible
membership, derived from lower/upper approximations in data analysis [199–202].
Definition 2.35.1 (Interval domain). Let D be a linearly ordered set (typically D ⊆ R). Define the set of (closed) intervals over D by
I(D) := {[ℓ, u] ⊆ D | ℓ ≤ u}.
For I = [ℓ, u], J = [ℓ′, u′] ∈ I(D), we write I ∩ J ≠ ∅ iff max(ℓ, ℓ′) ≤ min(u, u′).
Definition 2.35.2 (Interval-valued information system). An interval-valued information system is a quadruple
S = (U, A, {D_a}_{a∈A}, f),
where U is a nonempty finite universe of objects, A is a nonempty finite set of attributes, each attribute a ∈ A has an ordered domain D_a, and
f : U × A → ⋃_{a∈A} I(D_a)
satisfies f(x, a) ∈ I(D_a) for all (x, a) ∈ U × A. We write f(x, a) = [f⁻(x, a), f⁺(x, a)] with f⁻(x, a) ≤ f⁺(x, a).
Definition 2.35.3 (Interval-tolerance (interval indiscernibility) relation). Fix a nonempty attribute subset B ⊆ A. Define a binary relation ∼B on U by
x ∼B y ⇐⇒ ∀a ∈ B, f(x, a) ∩ f(y, a) ≠ ∅.
For each x ∈ U, the B-neighborhood (tolerance class) of x is
[x]B := {y ∈ U | x ∼B y}.
Note that ∼B is always reflexive and symmetric; it need not be transitive.
Definition 2.35.4 (Interval-valued rough approximations). Let X ⊆ U be a (crisp) target set and let B ⊆ A be nonempty. The B-lower and B-upper approximations of X (induced by the interval-tolerance neighborhoods) are
B̲(X) := {x ∈ U | [x]B ⊆ X},
B̄(X) := {x ∈ U | [x]B ∩ X ≠ ∅}.
Definition 2.35.5 (Interval-valued rough set). The interval-valued rough set of X w.r.t. B is the pair
RS_B(X) := (B̲(X), B̄(X)).
Its positive, boundary, and negative regions are
POS_B(X) := B̲(X),
BND_B(X) := B̄(X) \ B̲(X),
NEG_B(X) := U \ B̄(X).
If ∼B happens to be an equivalence relation (e.g., when intervals collapse to point values and equality is used), this reduces to the classical Pawlak rough set model.


# Page. 72

![Page Image](https://bcdn.docswell.com/page/K74W4MXZE1.jpg)

Example 2.35.6 (Interval-valued rough set for hypertension screening). Consider a clinic where each patient’s systolic blood pressure is recorded as an interval (to reflect device noise and short-term variability). Let the universe be
U = {p1, p2, p3, p4, p5}.
Take one interval-valued attribute B = {SBP}, and assign:
I(p1) = [118, 125],
I(p2) = [128, 138],
I(p3) = [135, 150],
I(p4) = [148, 160],
I(p5) = [155, 170].
Define an interval-based indiscernibility/tolerance relation ∼B on U by
pi ∼B pj ⇐⇒ I(pi) ∩ I(pj) ≠ ∅,
and write the B-neighborhood of p as [p]B := {q ∈ U | q ∼B p}. Then:
[p1]B = {p1},
[p2]B = {p2, p3},
[p3]B = {p2, p3, p4},
[p4]B = {p3, p4, p5},
[p5]B = {p4, p5}.
Let the target concept X ⊆ U be the set of patients flagged for antihypertensive evaluation:
X = {p3 , p4 , p5 }.
Using the standard neighborhood-based rough operators (compatible with the interval-valued setting),
B̲(X) := {p ∈ U | [p]B ⊆ X},
B̄(X) := {p ∈ U | [p]B ∩ X ≠ ∅},
we obtain
B̲(X) = {p4, p5},
B̄(X) = {p2, p3, p4, p5}.
Hence the interval-valued rough set RS_B(X) = (B̲(X), B̄(X)) yields:
POS_B(X) = {p4, p5},
BND_B(X) = {p2, p3},
NEG_B(X) = {p1}.
Interpretation: p4 , p5 are definitely high-risk under interval overlap uncertainty, p2 , p3 are borderline/ambiguous, and p1 is definitely not flagged.
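The neighborhoods and regions of Example 2.35.6 follow directly from the interval-overlap test in Definition 2.35.1; a minimal sketch (helper names ours):

```python
def overlaps(I, J):
    """Closed intervals [l, u] overlap iff max of lefts <= min of rights."""
    return max(I[0], J[0]) <= min(I[1], J[1])

def interval_rough(intervals, X):
    """Tolerance neighborhoods by interval overlap, then lower/upper approximations."""
    nbhd = {p: {q for q, J in intervals.items() if overlaps(I, J)}
            for p, I in intervals.items()}
    lower = {p for p, N in nbhd.items() if N <= X}
    upper = {p for p, N in nbhd.items() if N & X}
    return nbhd, lower, upper

# Systolic blood pressure intervals from Example 2.35.6
intervals = {"p1": (118, 125), "p2": (128, 138), "p3": (135, 150),
             "p4": (148, 160), "p5": (155, 170)}
X = {"p3", "p4", "p5"}

nbhd, lower, upper = interval_rough(intervals, X)
print(sorted(lower))  # ['p4', 'p5']
print(sorted(upper))  # ['p2', 'p3', 'p4', 'p5']
```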
2.36 Tolerance Rough Sets
Tolerance rough sets use tolerance (reflexive, symmetric) similarity relations: lower and upper approximations are computed from tolerance neighborhoods, allowing overlapping classes and a relaxed notion of equality, which makes the analysis more robust [25, 160, 203–205].
Definition 2.36.1 (Tolerance approximation space and tolerance rough set). Let U be a nonempty finite universe and let T ⊆ U × U be a tolerance relation, i.e., T is reflexive and symmetric. For each x ∈ U, define the T-neighborhood (tolerance class) of x by
T(x) := {y ∈ U | (x, y) ∈ T}.
Note that, unlike equivalence classes, tolerance classes may overlap, so an object can belong to multiple tolerance classes.
For any X ⊆ U, define the tolerance lower and upper approximations of X by
T̲(X) := {x ∈ U | T(x) ⊆ X},
T̄(X) := {x ∈ U | T(x) ∩ X ≠ ∅}.
The pair (T̲(X), T̄(X)) is called the tolerance rough set of X (with respect to T).


# Page. 73

![Page Image](https://bcdn.docswell.com/page/LJ1Y48NDEG.jpg)

Remark 2.36.2 (Block/covering viewpoint (optional)). A (maximal) tolerance block is a subset B ⊆ U such that B × B ⊆ T and B is inclusion-maximal with this property; such blocks are also called maximal consistent blocks. Let B(T) be the family of all maximal tolerance blocks; typically B(T) forms a covering of U. Then one can also define (covering-style) approximations by
apr̲_{B(T)}(X) := ⋃{B ∈ B(T) | B ⊆ X},
apr̄_{B(T)}(X) := ⋃{B ∈ B(T) | B ∩ X ≠ ∅}.
Example 2.36.3 (Tolerance rough set for churn-risk screening via “similar spending” groups).
A subscription service groups customers by tolerably similar monthly spending. Let
U = {c1 , c2 , c3 , c4 , c5 , c6 }
be six customers, and let the observed monthly spending (in normalized units) be
s(c1) = 2, s(c2) = 3, s(c3) = 3, s(c4) = 4, s(c5) = 5, s(c6) = 5.
Define a tolerance relation T ⊆ U × U by
(x, y) ∈ T ⇐⇒ |s(x) − s(y)| ≤ 1.
Then T is reflexive and symmetric (hence a tolerance relation). The T-neighborhoods are
T(c1) = {c1, c2, c3},
T(c2) = {c1, c2, c3, c4},
T(c3) = {c1, c2, c3, c4},
T(c4) = {c2, c3, c4, c5, c6},
T(c5) = {c4, c5, c6},
T(c6) = {c4, c5, c6}.
(Notice the overlap, e.g., c4 ∈ T (c2 ) ∩ T (c5 ).)
Let the target set be the customers assessed as “high churn risk”:
X = {c4 , c5 , c6 } ⊆ U
(e.g., those with s(·) ≥ 4).
The tolerance lower approximation is
T̲(X) = {x ∈ U | T(x) ⊆ X} = {c5, c6},
since T(c5) = T(c6) = {c4, c5, c6} ⊆ X, while T(c4) contains c2, c3 ∉ X.
The tolerance upper approximation is
T̄(X) = {x ∈ U | T(x) ∩ X ≠ ∅} = {c2, c3, c4, c5, c6},
because T(c2) ∩ X ≠ ∅ and T(c3) ∩ X ≠ ∅ (both contain c4), whereas T(c1) ∩ X = ∅.
Hence the tolerance rough set of X (w.r.t. T) is
(T̲(X), T̄(X)) = ({c5, c6}, {c2, c3, c4, c5, c6}),
with regions
POS_T(X) = {c5, c6},
BND_T(X) = {c2, c3, c4},
NEG_T(X) = {c1}.
Interpretation: c5, c6 are definitely high-risk under tolerance grouping; c2, c3, c4 are possibly high-risk because their tolerance neighborhoods touch the high-risk set; c1 is definitely not high-risk under the same tolerance criterion.
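The tolerance neighborhoods in Example 2.36.3 come straight from the spending threshold; a short sketch (helper name `tolerance_rough` is ours):

```python
def tolerance_rough(s, X, eps=1):
    """Build |s(x) - s(y)| <= eps tolerance neighborhoods, then approximate X."""
    T = {x: {y for y in s if abs(s[x] - s[y]) <= eps} for x in s}
    lower = {x for x in s if T[x] <= X}
    upper = {x for x in s if T[x] & X}
    return T, lower, upper

# Monthly spending data from Example 2.36.3
s = {"c1": 2, "c2": 3, "c3": 3, "c4": 4, "c5": 5, "c6": 5}
X = {"c4", "c5", "c6"}

T, lower, upper = tolerance_rough(s, X)
print(sorted(lower))  # ['c5', 'c6']
print(sorted(upper))  # ['c2', 'c3', 'c4', 'c5', 'c6']
```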


# Page. 74

![Page Image](https://bcdn.docswell.com/page/GJWGXZ4872.jpg)

2.37 One-directional S-Rough Set
A one-directional S-rough set approximates a target set after a one-directional transfer expansion via a family F of transfer functions, using R-equivalence classes to produce a lower/upper rough approximation pair [206, 207].
Definition 2.37.1 (One-directional S-rough set). Let U be a nonempty finite universe and let R ⊆ U × U be an equivalence relation. For x ∈ U, write
[x]R := {y ∈ U | (x, y) ∈ R}
for the R-equivalence class of x.
Let F be a nonempty family of element transfer functions (e.g., partial maps f : U ⇀ U), and fix f ∈ F. For any X ⊆ U, define the f-extension of X by
X_f := {u ∈ U \ X | f(u) ∈ X},
and define the associated one-directional S-set of X by
X° := X ∪ X_f.
The one-directional S-lower approximation and one-directional S-upper approximation of X° (with respect to (R, F) and the fixed f) are defined by
(R, F)∗(X°) := ⋃{[x]R | x ∈ U, [x]R ⊆ X°} = {x ∈ U | [x]R ⊆ X°},
(R, F)°(X°) := ⋃{[x]R | x ∈ U, [x]R ∩ X° ≠ ∅} = {x ∈ U | [x]R ∩ X° ≠ ∅}.
The ordered pair
((R, F)∗(X°), (R, F)°(X°))
is called the one-directional S-rough set of X°. Its boundary region is
Bn_{R,F}(X°) := (R, F)°(X°) \ (R, F)∗(X°).
Example 2.37.2 (One-directional S -rough set in compliance training propagation). Let U =
{e1 , e2 , e3 , e4 , e5 , e6 } be six employees. Assume employees are indiscernible (for a coarse compliance audit) when they belong to the same job-family group, so the equivalence relation R
partitions U into
[e1 ]R = [e2 ]R = [e5 ]R = {e1 , e2 , e5 },
[e3 ]R = [e4 ]R = {e3 , e4 },
[e6 ]R = {e6 }.
Let X ⊆ U be the set of employees who have personally passed a security-training quiz:
X = {e1 , e3 }.


# Page. 75

![Page Image](https://bcdn.docswell.com/page/4EZL612973.jpg)

Let F be a family of reporting-line maps, and fix the partial transfer function f : U ⇀ U (“f(u) is the direct manager of u”) given by
f(e2) = e1,  f(e4) = e3,  f(e5) = e2,
and f(e1), f(e3), f(e6) undefined.
Step 1: one-directional extension. By Definition 2.37.1, the f-extension is
X_f = {u ∈ U \ X | f(u) ∈ X} = {e2, e4},
because f(e2) = e1 ∈ X and f(e4) = e3 ∈ X, while f(e5) = e2 ∉ X. Hence the associated one-directional S-set is
X° = X ∪ X_f = {e1, e2, e3, e4}.
Step 2: Pawlak approximations of X° under R. The one-directional S-lower approximation is
(R, F)∗(X°) = {x ∈ U | [x]R ⊆ X°} = {e3, e4},
since [e3]R = [e4]R = {e3, e4} ⊆ X°, while [e1]R = {e1, e2, e5} ⊈ X° and [e6]R = {e6} ⊈ X°.
The one-directional S-upper approximation is
(R, F)°(X°) = {x ∈ U | [x]R ∩ X° ≠ ∅} = {e1, e2, e3, e4, e5},
because [e1]R = {e1, e2, e5} intersects X° (via e1, e2), and [e3]R = {e3, e4} intersects X°, while [e6]R does not.
Therefore the one-directional S-rough set of X° is
((R, F)∗(X°), (R, F)°(X°)) = ({e3, e4}, {e1, e2, e3, e4, e5}).
Its boundary region is
Bn_{R,F}(X°) = (R, F)°(X°) \ (R, F)∗(X°) = {e1, e2, e5}.
Interpretation. Passing the quiz can “propagate” one step along the manager map f, yielding X°. However, the coarse indiscernibility R (job-family grouping) prevents separating e5 from e1, e2, so {e1, e2, e5} becomes a boundary (uncertain) region.
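The transfer-then-approximate pattern of Example 2.37.2 can be sketched as follows (function name `s_rough` is ours; the partial map f is a plain dict):

```python
def s_rough(classes, X, f):
    """One-directional extension X° = X ∪ X_f, then Pawlak approximations of X°."""
    Xf = {u for u in classes if u not in X and f.get(u) in X}
    Xo = X | Xf
    lower = {x for x, cls in classes.items() if cls <= Xo}
    upper = {x for x, cls in classes.items() if cls & Xo}
    return Xo, lower, upper

# Compliance-training data from Example 2.37.2
blocks = [frozenset({"e1", "e2", "e5"}), frozenset({"e3", "e4"}), frozenset({"e6"})]
classes = {x: blk for blk in blocks for x in blk}
X = {"e1", "e3"}
f = {"e2": "e1", "e4": "e3", "e5": "e2"}   # partial manager map

Xo, lower, upper = s_rough(classes, X, f)
print(sorted(Xo))     # ['e1', 'e2', 'e3', 'e4']
print(sorted(lower))  # ['e3', 'e4']
print(sorted(upper))  # ['e1', 'e2', 'e3', 'e4', 'e5']
```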
2.38 Complex Rough Set
Complex rough sets represent membership uncertainty using complex-valued grades (magnitude
and phase) and compute lower/upper approximations via indiscernibility relations, enabling two-dimensional uncertainty modeling in practice.


# Page. 76

![Page Image](https://bcdn.docswell.com/page/Y76W2LVD7V.jpg)

Definition 2.38.1 (Complex Rough Set (complex-coded Pawlak rough set)). Let U ≠ ∅ be a universe and let R ⊆ U × U be an equivalence relation. For A ⊆ U, write the Pawlak lower and upper approximations as
R̲(A) := {x ∈ U | [x]R ⊆ A},
R̄(A) := {x ∈ U | [x]R ∩ A ≠ ∅}.
Define the rough complex alphabet
D_RS := {0, i, 1 + i} ⊂ C   (i² = −1).
The complex rough membership function of A is the map
μ^C_A : U → D_RS,   μ^C_A(x) := 1_{R̲(A)}(x) + i·1_{R̄(A)}(x),
where 1_S denotes the indicator of a crisp set S ⊆ U. The pair
CR(A) := (U, μ^C_A)
is called the Complex Rough Set induced by A (under R). Equivalently, for x ∈ U:
μ^C_A(x) = 1 + i if x ∈ R̲(A) (definitely in),
μ^C_A(x) = i if x ∈ R̄(A) \ R̲(A) (boundary),
μ^C_A(x) = 0 if x ∈ U \ R̄(A) (definitely out).
Definition 2.38.2 (Rough equality). For A, B ⊆ U, write A ≡R B if
R̲(A) = R̲(B) and R̄(A) = R̄(B).
This is an equivalence relation on P(U); its equivalence classes are the usual rough sets (induced by R) represented by approximation pairs.
Example 2.38.3 (Complex rough set for transaction-fraud screening). A payment provider
groups transactions into “indiscernibility” blocks using coarse features (e.g., same card, same
merchant category, and same hour), because transactions in the same block are operationally
hard to distinguish at first glance.
Let
U = {t1 , t2 , t3 , t4 , t5 , t6 , t7 }
be seven transactions, and let R be the equivalence relation “belongs to the same block” with
equivalence classes
[t1 ]R = {t1 , t2 },
[t3 ]R = {t3 , t4 , t5 },
[t6 ]R = {t6 },
[t7 ]R = {t7 }.
Suppose the (currently) confirmed fraudulent transactions are
A = {t2, t4, t6} ⊆ U.
Then the Pawlak lower and upper approximations are
R̲(A) = {x ∈ U | [x]R ⊆ A} = {t6},
because only the singleton class [t6]R = {t6} is fully contained in A, and
R̄(A) = {x ∈ U | [x]R ∩ A ≠ ∅} = {t1, t2, t3, t4, t5, t6},


# Page. 77

![Page Image](https://bcdn.docswell.com/page/G75M21D874.jpg)

since [t1]R meets A (via t2) and [t3]R meets A (via t4), while [t7]R does not.
Hence the complex rough membership function (Definition 2.38.1) is
μ^C_A(x) = 1_{R̲(A)}(x) + i·1_{R̄(A)}(x),
so explicitly
μ^C_A(t6) = 1 + i,   μ^C_A(tj) = i (j ∈ {1, 2, 3, 4, 5}),   μ^C_A(t7) = 0.
Interpretation: t6 is definitely fraudulent (its whole block is confirmed), t1, t2, t3, t4, t5 are boundary/possibly fraudulent (their blocks contain at least one fraud), and t7 is definitely non-fraudulent under the current granulation. The resulting complex rough set is CR(A) = (U, μ^C_A).
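Since Python has native complex numbers, the coding μ(x) = 1_lower(x) + i·1_upper(x) of Example 2.38.3 is a one-liner per object (helper name ours):

```python
def complex_rough_membership(classes, A):
    """Encode each object's rough status as a complex number: real part = in lower,
    imaginary part = in upper (so 1+1j, 1j, 0 are the only values)."""
    lower = {x for x, cls in classes.items() if cls <= A}
    upper = {x for x, cls in classes.items() if cls & A}
    return {x: complex(x in lower, x in upper) for x in classes}

# Transaction blocks and confirmed frauds from Example 2.38.3
blocks = [frozenset({"t1", "t2"}), frozenset({"t3", "t4", "t5"}),
          frozenset({"t6"}), frozenset({"t7"})]
classes = {x: blk for blk in blocks for x in blk}
A = {"t2", "t4", "t6"}

mu = complex_rough_membership(classes, A)
print(mu["t6"])  # definitely fraudulent: 1+1j
print(mu["t1"])  # boundary: 1j
print(mu["t7"])  # definitely not: 0j
```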
Theorem 2.38.4 (Well-definedness and faithfulness of the complex coding). Let (U, R) be an approximation space with R an equivalence relation.
(i) For every A ⊆ U, the map μ^C_A is well-defined and satisfies μ^C_A(U) ⊆ D_RS.
(ii) The complex coding depends only on the rough set (approximation pair): if A ≡R B, then μ^C_A = μ^C_B. Hence the assignment
Φ : P(U)/≡R → D_RS^U,   Φ([A]) := μ^C_A
is well-defined.
(iii) The encoding is faithful: for every A ⊆ U one can recover the approximations from μ^C_A via
R̲(A) = {x ∈ U | Re(μ^C_A(x)) = 1},
R̄(A) = {x ∈ U | Im(μ^C_A(x)) = 1}.
In particular, Φ is injective.
Proof. (i) For any A ⊆ U and x ∈ U, the indicators 1_{R̲(A)}(x), 1_{R̄(A)}(x) ∈ {0, 1}, so μ^C_A(x) = a + ib with a, b ∈ {0, 1}. Moreover, R̲(A) ⊆ R̄(A) holds for Pawlak approximations, hence a = 1 ⇒ b = 1. Therefore only 0, i, 1 + i occur and μ^C_A : U → D_RS is well-defined.
(ii) If A ≡R B, then R̲(A) = R̲(B) and R̄(A) = R̄(B). Thus, for every x ∈ U,
μ^C_A(x) = 1_{R̲(A)}(x) + i·1_{R̄(A)}(x) = 1_{R̲(B)}(x) + i·1_{R̄(B)}(x) = μ^C_B(x),
so μ^C_A = μ^C_B and Φ is well-defined on the quotient.
(iii) By definition, Re(μ^C_A(x)) = 1_{R̲(A)}(x) and Im(μ^C_A(x)) = 1_{R̄(A)}(x) for all x ∈ U. Hence the displayed reconstruction identities follow immediately. If Φ([A]) = Φ([B]), then μ^C_A = μ^C_B, so their real and imaginary parts coincide pointwise, yielding R̲(A) = R̲(B) and R̄(A) = R̄(B), i.e. A ≡R B. Therefore Φ is injective.


# Page. 78

![Page Image](https://bcdn.docswell.com/page/9J29418VER.jpg)

2.39 MetaRough Set
A MetaRough Set lifts rough approximations to meta-level families of rough objects via a meta-indiscernibility relation, yielding iterated roughness within a MetaStructure hierarchy of arbitrary depth [208].
Definition 2.39.1 ((Recall) Pawlak approximation space and rough objects). Let X be a nonempty set and let R ⊆ X × X be an equivalence relation. For A ⊆ X, define the (Pawlak) lower and upper approximations of A by
R̲(A) := {x ∈ X | [x]R ⊆ A},
R̄(A) := {x ∈ X | [x]R ∩ A ≠ ∅},
where [x]R := {y ∈ X | (x, y) ∈ R}. The set of rough objects over (X, R) is
Rough(X, R) := {(R̲(A), R̄(A)) | A ⊆ X}.
Definition 2.39.2 (Meta-indiscernibility on rough objects). Fix a Pawlak space (X, R) and let
E be an equivalence relation on Rough(X, R). For r ∈ Rough(X, R), denote its E -equivalence
class by
[r]E := { s ∈ Rough(X, R) | s E r }.
Definition 2.39.3 (MetaRough approximations and MetaRough Set). Fix (X, R) and an equivalence relation E on Rough(X, R). For any family C ⊆ Rough(X, R), define the meta-lower and meta-upper approximations (with respect to E) by
C_E := {r ∈ Rough(X, R) | [r]E ⊆ C},
C^E := {r ∈ Rough(X, R) | [r]E ∩ C ≠ ∅}.
The ordered pair
(C_E, C^E)
is called the MetaRough Set of C (in Rough(X, R)) with respect to the meta-indiscernibility E.
Example 2.39.4 (MetaRough set in privacy-preserving risk-policy selection). Consider a hospital triage system that initially groups patients only by coarse observable profiles (e.g., age
bracket and two binary symptoms). Let
X = {x1 , x2 , x3 , x4 , x5 }
be five patients and let R be the induced indiscernibility relation with equivalence classes
E1 := {x1 , x2 },
E2 := {x3 , x4 },
E3 := {x5 }.
Thus (X, R) is a Pawlak approximation space (Definition 2.39.1). For any A ⊆ X, write its rough object as
r(A) := (R̲(A), R̄(A)) ∈ Rough(X, R),   Bnd(r(A)) := R̄(A) \ R̲(A).
Step 1: Rough objects (examples). Let
A5 := {x5 },
A15 := {x1 , x5 }.


# Page. 79

![Page Image](https://bcdn.docswell.com/page/DEY4MZ6QJM.jpg)

Then
r(A5) = ({x5}, {x5}),   Bnd(r(A5)) = ∅,
because A5 is a union of R-classes. In contrast,
R̲(A15) = {x5},   R̄(A15) = E1 ∪ E3 = {x1, x2, x5},
hence
r(A15) = ({x5}, {x1, x2, x5}),   Bnd(r(A15)) = {x1, x2}.
Step 2: Meta-indiscernibility on rough objects. At the policy layer, suppose an auditing rule is privacy-restricted and can see only how ambiguous a rough object is (i.e., the size of its boundary region), but not which patients are in it. Define an equivalence relation E on Rough(X, R) by
r E s ⇐⇒ |Bnd(r)| = |Bnd(s)|.
(Reflexivity/symmetry/transitivity are immediate because equality of integers is an equivalence relation.)
Let
C0 := {r(A) ∈ Rough(X, R) | Bnd(r(A)) = ∅}.
Since Bnd(r(A)) = ∅ holds exactly when A is a union of R-classes, we have the explicit list
C0 = {(∅, ∅), (E1, E1), (E2, E2), (E3, E3), (E1∪E2, E1∪E2), (E1∪E3, E1∪E3), (E2∪E3, E2∪E3), (X, X)}.
Moreover, for every r ∈ C0 one has [r]E = C0 (the whole “boundary-0” class).
Step 3: A meta-level target family and MetaRough approximations. Suppose the hospital wants to select definable rough objects that definitely classify x5 as high risk (e.g., x5 has a critical biomarker). Represent this policy target as the family
C := {(A, A) ∈ C0 | x5 ∈ A} = {(E3, E3), (E1∪E3, E1∪E3), (E2∪E3, E2∪E3), (X, X)} ⊆ Rough(X, R).
Compute the MetaRough approximations of C in Rough(X, R) with respect to E (Definition 2.39.3). If r ∈ C0, then [r]E = C0, so
[r]E ⊆ C fails (since C ⊊ C0),   while [r]E ∩ C = C ≠ ∅.
Hence every r ∈ C0 lies in the meta-upper approximation but none lie in the meta-lower:
C_E = ∅,   C^E = C0.
If r ∉ C0 (i.e., Bnd(r) ≠ ∅), then |Bnd(r)| > 0 and thus [r]E is disjoint from C (which consists only of boundary-0 rough objects). Therefore
r ∉ C^E.
Interpretation. At the meta-level, because E forgets which patients appear and retains only boundary size, all definable rough objects (boundary 0) become indistinguishable. Thus the policy family C is only possibly recognized among definable rough objects: the MetaRough set (C_E, C^E) equals (∅, C0), so the meta-boundary C^E \ C_E is nonempty and equals C0.
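The enumeration behind Example 2.39.4 is small enough to verify exhaustively; a sketch (all helper names ours) that builds every rough object, groups them by boundary size, and computes the MetaRough pair:

```python
from itertools import combinations

X = frozenset({"x1", "x2", "x3", "x4", "x5"})
blocks = [frozenset({"x1", "x2"}), frozenset({"x3", "x4"}), frozenset({"x5"})]

def approx(A):
    """Pawlak lower/upper approximations of A under the block partition."""
    lo = frozenset().union(*[b for b in blocks if b <= A] or [frozenset()])
    up = frozenset().union(*[b for b in blocks if b & A] or [frozenset()])
    return (lo, up)

subsets = [frozenset(c) for r in range(len(X) + 1)
           for c in combinations(sorted(X), r)]
rough_objects = {approx(A) for A in subsets}          # Rough(X, R)

def bnd_size(r):
    return len(r[1] - r[0])

# Target family C: definable (boundary-0) objects whose lower part contains x5
C = {r for r in rough_objects if bnd_size(r) == 0 and "x5" in r[0]}
C0 = {r for r in rough_objects if bnd_size(r) == 0}

# Meta approximations under E: "same boundary size"
meta_lower = {r for r in rough_objects
              if {s for s in rough_objects if bnd_size(s) == bnd_size(r)} <= C}
meta_upper = {r for r in rough_objects
              if {s for s in rough_objects if bnd_size(s) == bnd_size(r)} & C}

print(len(C0), len(C))       # 8 definable rough objects, 4 in the target
print(meta_lower == set())   # meta-lower is empty
print(meta_upper == C0)      # meta-upper is the whole boundary-0 class
```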


# Page. 80

![Page Image](https://bcdn.docswell.com/page/VJNYW3Z278.jpg)

Proposition 2.39.5 (Meta-level sandwich property). For every C ⊆ Rough(X, R) and every equivalence relation E on Rough(X, R),
C_E ⊆ C ⊆ C^E.
Proof. If r ∈ C_E then [r]E ⊆ C, hence r ∈ [r]E ⊆ C and C_E ⊆ C. If r ∈ C then [r]E ∩ C ⊇ {r} ≠ ∅, hence r ∈ C^E and C ⊆ C^E.
2.40 T-valued Rough Set
T-valued rough sets replace crisp membership with values in a structure T; lower and upper approximations are computed via meet and join operations over indiscernibility classes, supporting decisions under uncertainty.
Definition 2.40.1 (T-valued (structure-valued) rough set). Let U ≠ ∅ be a finite universe and let R ⊆ U × U be an equivalence relation. For x ∈ U, write
[x]R := {y ∈ U | (x, y) ∈ R}.
Let (T, ≤_T) be a poset such that every nonempty finite subset of T admits a meet and a join (equivalently, T is a lattice if one prefers a global assumption). A T-valued set on U is a mapping
A : U → T.
Define the T-valued lower and T-valued upper rough approximations of A by
apr̲^T_R(A)(x) := ⋀_{y∈[x]R} A(y),   apr̄^T_R(A)(x) := ⋁_{y∈[x]R} A(y)   (x ∈ U),
where ∧ and ∨ are the meet and join in T. The pair
(apr̲^T_R(A), apr̄^T_R(A))
is called the T-valued rough set (or structure-valued rough approximation) induced by A on (U, R).
Example 2.40.2 (T-valued rough set: coarse loan screening with ordinal decision tags). Let U = {u1, u2, u3, u4} be four loan applicants. Suppose the bank groups applicants by a coarse profile (e.g., credit-score band × income band), yielding the equivalence classes
[u1]R = [u2]R = {u1, u2},   [u3]R = [u4]R = {u3, u4}.
Let
T = {Reject ≺ Review ≺ Approve}
be a finite chain (hence a lattice), where ∧ is the minimum and ∨ is the maximum. Define a T-valued set A : U → T by the bank’s preliminary tag:
A(u1) = Review,   A(u2) = Approve,   A(u3) = Reject,   A(u4) = Review.
Then the T-valued lower/upper approximations (Definition 2.40.1) are
apr̲^T_R(A)(u1) = A(u1) ∧ A(u2) = Review,   apr̄^T_R(A)(u1) = A(u1) ∨ A(u2) = Approve,
and
apr̲^T_R(A)(u3) = A(u3) ∧ A(u4) = Reject,   apr̄^T_R(A)(u3) = A(u3) ∨ A(u4) = Review.
Interpretation: within an indiscernibility class, the lower tag is the most conservative (worst-case) decision, while the upper tag is the most optimistic (best-case) decision.
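On a finite chain the meet and join are just min and max under the chain's rank, so Example 2.40.2 reduces to a dictionary lookup (helper names and the `RANK` table are ours):

```python
RANK = {"Reject": 0, "Review": 1, "Approve": 2}   # the chain Reject < Review < Approve

def t_valued_rough(classes, A):
    """Meet (min rank) and join (max rank) of tags within each indiscernibility class."""
    lower, upper = {}, {}
    for x, cls in classes.items():
        tags = [A[y] for y in cls]
        lower[x] = min(tags, key=RANK.get)   # worst-case tag in the class
        upper[x] = max(tags, key=RANK.get)   # best-case tag in the class
    return lower, upper

blocks = [frozenset({"u1", "u2"}), frozenset({"u3", "u4"})]
classes = {x: blk for blk in blocks for x in blk}
A = {"u1": "Review", "u2": "Approve", "u3": "Reject", "u4": "Review"}

lower, upper = t_valued_rough(classes, A)
print(lower["u1"], upper["u1"])  # Review Approve
print(lower["u3"], upper["u3"])  # Reject Review
```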


# Page. 81

![Page Image](https://bcdn.docswell.com/page/YE9PX9ZDJ3.jpg)

Definition 2.40.3 (Vector-valued rough set). Let (L, ≤) be a lattice (e.g., L = [0, 1] with the usual order) and fix p ≥ 1. Put T := L^p and equip T with the componentwise order:
u ≤_T v ⇐⇒ u_j ≤ v_j (j = 1, ..., p).
Then T is a lattice with componentwise meet/join:
(u ∧_T v)_j := u_j ∧ v_j,   (u ∨_T v)_j := u_j ∨ v_j.
A vector-valued rough set on (U, R) is a T-valued rough set in the sense of Definition 2.40.1, i.e., a mapping A : U → L^p together with
apr̲^vec_R(A)(x) = ⋀_{y∈[x]R} A(y),   apr̄^vec_R(A)(x) = ⋁_{y∈[x]R} A(y),
computed componentwise in L^p.
Example 2.40.4 (Vector-valued rough set: multi-criteria risk profile under coarse indiscernibility). Let the universe and equivalence relation be as above: U = {u1 , u2 , u3 , u4 } with classes
{u1 , u2 } and {u3 , u4 }. Take L = [0, 1] and p = 3, and interpret
$$A(u) = (r(u), f(u), p(u)) \in [0, 1]^3$$
as (default risk, fraud risk, profitability score). Define
A(u1 ) = (0.20, 0.70, 0.40),
A(u2 ) = (0.50, 0.60, 0.90).
Then, using componentwise meet/join (Definition 2.40.3),
$$\underline{\mathrm{apr}}_R^{\mathrm{vec}}(A)(u_1) = \min\{A(u_1), A(u_2)\} = (0.20, 0.60, 0.40), \qquad \overline{\mathrm{apr}}_R^{\mathrm{vec}}(A)(u_1) = \max\{A(u_1), A(u_2)\} = (0.50, 0.70, 0.90),$$
where min / max are taken componentwise. Interpretation: under coarse grouping, the lower
vector gives a conservative envelope of scores available in the class, and the upper vector gives a
permissive envelope.
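The componentwise min/max of Example 2.40.4 can be sketched directly (plain tuples; not the source's code):

```python
# Componentwise lower/upper approximations with L = [0,1], p = 3
# (numbers taken from Example 2.40.4).
A = {"u1": (0.20, 0.70, 0.40), "u2": (0.50, 0.60, 0.90)}
cls = ["u1", "u2"]  # the class [u1]_R

lower = tuple(min(A[y][j] for y in cls) for j in range(3))
upper = tuple(max(A[y][j] for y in cls) for j in range(3))
```

`lower` and `upper` are the conservative and permissive envelopes of the example.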
Definition 2.40.5 (Matrix-valued rough set). Let (L, ≤) be a lattice and fix integers m, n ≥ 1.
Put $T := \mathrm{Mat}_{m \times n}(L) \cong L^{m \times n}$ and define the entrywise order

$$A \le_T B \iff A_{ij} \le B_{ij} \quad (1 \le i \le m,\ 1 \le j \le n).$$
Then T is a lattice with entrywise meet/join:
$$(A \wedge_T B)_{ij} := A_{ij} \wedge B_{ij}, \qquad (A \vee_T B)_{ij} := A_{ij} \vee B_{ij}.$$
A matrix-valued rough set on (U, R) is a T -valued rough set, i.e., a mapping A : U → Matm×n (L)
and approximations

$$\underline{\mathrm{apr}}_R^{\mathrm{mat}}(A)(x) = \bigwedge_{y \in [x]_R} A(y), \qquad \overline{\mathrm{apr}}_R^{\mathrm{mat}}(A)(x) = \bigvee_{y \in [x]_R} A(y),$$

where $\wedge, \vee$ are taken entrywise in $\mathrm{Mat}_{m \times n}(L)$.


# Page. 82

![Page Image](https://bcdn.docswell.com/page/GE8D2915ED.jpg)

Example 2.40.6 (Matrix-valued rough set: scenario-by-horizon risk table aggregated within a
class). Let U = {u1 , u2 , u3 , u4 } and R be as above. Let L = [0, 1] and consider 2 × 2 matrices
whose entries represent risk scores for (short/long horizon)×(credit/liquidity scenario). Define
a matrix-valued set $A : U \to \mathrm{Mat}_{2 \times 2}([0, 1])$ by

$$A(u_1) = \begin{pmatrix} 0.3 & 0.6 \\ 0.5 & 0.4 \end{pmatrix}, \qquad A(u_2) = \begin{pmatrix} 0.7 & 0.2 \\ 0.4 & 0.8 \end{pmatrix}.$$

Then for the class $[u_1]_R = \{u_1, u_2\}$, the entrywise meet/join (Definition 2.40.5) give

$$\underline{\mathrm{apr}}_R^{\mathrm{mat}}(A)(u_1) = A(u_1) \wedge_T A(u_2) = \begin{pmatrix} \min(0.3, 0.7) & \min(0.6, 0.2) \\ \min(0.5, 0.4) & \min(0.4, 0.8) \end{pmatrix} = \begin{pmatrix} 0.3 & 0.2 \\ 0.4 & 0.4 \end{pmatrix},$$

$$\overline{\mathrm{apr}}_R^{\mathrm{mat}}(A)(u_1) = A(u_1) \vee_T A(u_2) = \begin{pmatrix} 0.7 & 0.6 \\ 0.5 & 0.8 \end{pmatrix}.$$
Interpretation: lower/upper matrix approximations provide conservative/optimistic bounds for
each scenario entry when applicants are indistinguishable at the coarse level.
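The entrywise meet/join of Example 2.40.6 reduces to nested min/max over plain lists; a minimal sketch (not from the source):

```python
# Entrywise meet/join for the class {u1, u2} of Example 2.40.6.
A1 = [[0.3, 0.6], [0.5, 0.4]]      # A(u1)
A2 = [[0.7, 0.2], [0.4, 0.8]]      # A(u2)

# Lower approximation: entrywise minimum; upper: entrywise maximum.
meet = [[min(a, b) for a, b in zip(r1, r2)] for r1, r2 in zip(A1, A2)]
join = [[max(a, b) for a, b in zip(r1, r2)] for r1, r2 in zip(A1, A2)]
```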
Definition 2.40.7 (Tensor-valued rough set). Let (L, ≤) be a lattice and fix an order r ≥ 1
and finite index sets I1 , . . . , Ir . Put
$$T := L^{I_1 \times \cdots \times I_r}$$

(the set of r-way tensors with entries in L), ordered entrywise:

$$A \le_T B \iff A_{i_1, \dots, i_r} \le B_{i_1, \dots, i_r} \text{ for all } (i_1, \dots, i_r) \in I_1 \times \cdots \times I_r.$$
Then T is a lattice with entrywise meet/join. A tensor-valued rough set on (U, R) is a T -valued
rough set, i.e., a mapping $A : U \to T$ and approximations

$$\underline{\mathrm{apr}}_R^{\mathrm{ten}}(A)(x) = \bigwedge_{y \in [x]_R} A(y), \qquad \overline{\mathrm{apr}}_R^{\mathrm{ten}}(A)(x) = \bigvee_{y \in [x]_R} A(y),$$

computed entrywise in $L^{I_1 \times \cdots \times I_r}$.
Example 2.40.8 (Tensor-valued rough set: sensor×shift×failure-mode anomaly cube). Let
U = {m1 , m2 , m3 } be machines in a factory. Group machines by (model, firmware), giving the
equivalence classes
[m1 ]R = [m2 ]R = {m1 , m2 },
[m3 ]R = {m3 }.
Let L = [0, 1] and choose index sets
I1 = {s1 , s2 } (sensors),
I2 = {d, n} (day/night),
I3 = {f1 , f2 } (failure modes).
A tensor-valued set $A : U \to L^{I_1 \times I_2 \times I_3}$ assigns to each machine a 2 × 2 × 2 cube of anomaly probabilities. Define A(m1) and A(m2) by listing all entries:

A(m1):

|    | (d) f1 | (d) f2 | (n) f1 | (n) f2 |
|----|--------|--------|--------|--------|
| s1 | 0.1    | 0.4    | 0.3    | 0.2    |
| s2 | 0.6    | 0.5    | 0.7    | 0.1    |

A(m2):

|    | (d) f1 | (d) f2 | (n) f1 | (n) f2 |
|----|--------|--------|--------|--------|
| s1 | 0.2    | 0.3    | 0.8    | 0.4    |
| s2 | 0.4    | 0.6    | 0.2    | 0.9    |
Then for the class [m1 ]R = {m1 , m2 }, the tensor-valued lower/upper approximations (Definition 2.40.7) are computed entrywise:
$$\underline{\mathrm{apr}}_R^{\mathrm{ten}}(A)(m_1) = \min\{A(m_1), A(m_2)\}, \qquad \overline{\mathrm{apr}}_R^{\mathrm{ten}}(A)(m_1) = \max\{A(m_1), A(m_2)\},$$

so, for example,

$$\underline{\mathrm{apr}}_R^{\mathrm{ten}}(A)(m_1)_{s_1, d, f_1} = \min(0.1, 0.2) = 0.1, \qquad \overline{\mathrm{apr}}_R^{\mathrm{ten}}(A)(m_1)_{s_1, d, f_1} = \max(0.1, 0.2) = 0.2,$$

$$\underline{\mathrm{apr}}_R^{\mathrm{ten}}(A)(m_1)_{s_2, n, f_2} = \min(0.1, 0.9) = 0.1, \qquad \overline{\mathrm{apr}}_R^{\mathrm{ten}}(A)(m_1)_{s_2, n, f_2} = \max(0.1, 0.9) = 0.9,$$
and similarly for all other indices (i1 , i2 , i3 ) ∈ I1 × I2 × I3 . Interpretation: the lower tensor
gives a conservative anomaly cube (guaranteed within the class), while the upper tensor gives
the maximal anomaly cube possible within the same coarse machine group.
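The entrywise tensor approximations of Example 2.40.8 can be sketched with a dictionary keyed by index triples (the encoding is an illustrative assumption, not the source's code):

```python
# Entrywise lower/upper tensor approximations for the class [m1]_R = {m1, m2}.
from itertools import product

I1, I2, I3 = ["s1", "s2"], ["d", "n"], ["f1", "f2"]
A = {
    "m1": {("s1", "d", "f1"): 0.1, ("s1", "d", "f2"): 0.4,
           ("s1", "n", "f1"): 0.3, ("s1", "n", "f2"): 0.2,
           ("s2", "d", "f1"): 0.6, ("s2", "d", "f2"): 0.5,
           ("s2", "n", "f1"): 0.7, ("s2", "n", "f2"): 0.1},
    "m2": {("s1", "d", "f1"): 0.2, ("s1", "d", "f2"): 0.3,
           ("s1", "n", "f1"): 0.8, ("s1", "n", "f2"): 0.4,
           ("s2", "d", "f1"): 0.4, ("s2", "d", "f2"): 0.6,
           ("s2", "n", "f1"): 0.2, ("s2", "n", "f2"): 0.9},
}
cls = ["m1", "m2"]  # the class [m1]_R
lower = {i: min(A[m][i] for m in cls) for i in product(I1, I2, I3)}
upper = {i: max(A[m][i] for m in cls) for i in product(I1, I2, I3)}
```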


# Page. 83

![Page Image](https://bcdn.docswell.com/page/LELM2WQ37R.jpg)

2.41 Refined Rough Set
A refined rough set partitions classical lower/upper regions into finer layers using multiple
thresholds or relations, improving granularity for analysis [209, 210]. Similar notions with an
analogous refinement-style structure include refined neutrosophic sets [211–214] and refined soft
sets [215, 216].
Definition 2.41.1 (Refined rough set (multi-granularity refinement)). Let U ≠ ∅ be a finite universe and let p ∈ ℕ. A refinement chain on U is a finite family of equivalence relations

$$\mathcal{R} = \langle R_1, R_2, \dots, R_p \rangle \text{ such that } R_1 \subseteq R_2 \subseteq \cdots \subseteq R_p \subseteq U \times U.$$
(Thus R1 is the finest granulation and Rp is the coarsest.) For x ∈ U , write
[x]Ri := { y ∈ U | (x, y) ∈ Ri }
for the Ri -equivalence class of x.
For any target set A ⊆ U and each i ∈ {1, . . . , p}, define the Pawlak lower/upper approximations
at level i by
$$\underline{\mathrm{apr}}_i(A) := \{\, x \in U \mid [x]_{R_i} \subseteq A \,\}, \qquad \overline{\mathrm{apr}}_i(A) := \{\, x \in U \mid [x]_{R_i} \cap A \neq \emptyset \,\}.$$

The refined rough set of A with respect to the refinement chain $\mathcal{R}$ is the p-tuple of rough approximations

$$RR(A) := \big( \underline{\mathrm{apr}}_i(A), \overline{\mathrm{apr}}_i(A) \big)_{i=1}^{p}.$$
Example 2.41.2 (Refined rough set for fraud screening under multi-granularity customer grouping). Let U = {u1 , u2 , u3 , u4 , u5 , u6 } be six credit-card transactions. Let
A = {u1 , u2 } ⊆ U
be the (crisp) target set of transactions that are confirmed fraudulent after investigation.
We consider a refinement chain R = hR1 , R2 , R3 i capturing progressively coarser operational
granules used by a bank:
• R1 : “same card and same merchant” (finest granulation),
• R2 : “same card” (intermediate granulation),
• R3 : “same country of transaction” (coarsest granulation).
Assume the induced equivalence classes are as follows.
Level R1 (same card and merchant).
[u1]R1 = [u2]R1 = {u1, u2},   [u3]R1 = {u3},   [u4]R1 = [u5]R1 = {u4, u5},   [u6]R1 = {u6}.


# Page. 84

![Page Image](https://bcdn.docswell.com/page/4JMY891MJW.jpg)

Level R2 (same card).
[u1 ]R2 = [u2 ]R2 = [u3 ]R2 = {u1 , u2 , u3 },
[u4 ]R2 = [u5 ]R2 = {u4 , u5 },
[u6 ]R2 = {u6 }.
Level R3 (same country).
[u1 ]R3 = · · · = [u5 ]R3 = {u1 , u2 , u3 , u4 , u5 },
[u6 ]R3 = {u6 }.
It is immediate that R1 ⊆ R2 ⊆ R3 (each step merges some classes), hence R is a refinement
chain.
Now compute Pawlak approximations at each level.
Level i = 1. Since [u1]R1 = {u1, u2} ⊆ A, we have u1, u2 ∈ $\underline{\mathrm{apr}}_1(A)$. All other R1-classes are not contained in A, so

$$\underline{\mathrm{apr}}_1(A) = \{u_1, u_2\}, \qquad \overline{\mathrm{apr}}_1(A) = \{u_1, u_2\}.$$

Level i = 2. The class {u1, u2, u3} intersects A but is not contained in A, so it contributes to the upper but not the lower. Thus

$$\underline{\mathrm{apr}}_2(A) = \{u_1, u_2\}, \qquad \overline{\mathrm{apr}}_2(A) = \{u_1, u_2, u_3\}.$$

Level i = 3. The large class {u1, u2, u3, u4, u5} intersects A but is not contained in A, hence

$$\underline{\mathrm{apr}}_3(A) = \{u_1, u_2\}, \qquad \overline{\mathrm{apr}}_3(A) = \{u_1, u_2, u_3, u_4, u_5\}.$$
Therefore, the refined rough set of A with respect to $\mathcal{R}$ is the 3-tuple

$$RR(A) = \big( (\underline{\mathrm{apr}}_1(A), \overline{\mathrm{apr}}_1(A)),\ (\underline{\mathrm{apr}}_2(A), \overline{\mathrm{apr}}_2(A)),\ (\underline{\mathrm{apr}}_3(A), \overline{\mathrm{apr}}_3(A)) \big),$$

i.e.,

$$RR(A) = \big( (\{u_1, u_2\}, \{u_1, u_2\}),\ (\{u_1, u_2\}, \{u_1, u_2, u_3\}),\ (\{u_1, u_2\}, \{u_1, u_2, u_3, u_4, u_5\}) \big).$$
Interpretation. At finer granularity (R1 ), fraud is precisely isolated; at coarser granularities
(R2 , R3 ), the possible fraud region expands, reflecting weaker discriminatory power of the coarser
grouping.
Theorem 2.41.3 (Well-definedness and monotone refinement). In the setting of Definition 2.41.1,
for every A ⊆ U and each i ∈ {1, . . . , p}:


# Page. 85

![Page Image](https://bcdn.docswell.com/page/PJR95GLL79.jpg)

(i) $\underline{\mathrm{apr}}_i(A) \subseteq \overline{\mathrm{apr}}_i(A)$.
(ii) (Monotonicity across refinement) If 1 ≤ i ≤ j ≤ p, then

$$\underline{\mathrm{apr}}_i(A) \supseteq \underline{\mathrm{apr}}_j(A), \qquad \overline{\mathrm{apr}}_i(A) \subseteq \overline{\mathrm{apr}}_j(A).$$
Hence RR (A) is well-defined as a nested multi-level description of A.
Proof. (i) Fix i and take $x \in \underline{\mathrm{apr}}_i(A)$. Then $[x]_{R_i} \subseteq A$. Since $R_i$ is reflexive, $x \in [x]_{R_i}$, so $x \in [x]_{R_i} \cap A$ and in particular $[x]_{R_i} \cap A \neq \emptyset$. Thus $x \in \overline{\mathrm{apr}}_i(A)$, proving $\underline{\mathrm{apr}}_i(A) \subseteq \overline{\mathrm{apr}}_i(A)$.

(ii) Assume $i \le j$, so $R_i \subseteq R_j$. For equivalence relations, $R_i \subseteq R_j$ implies

$$[x]_{R_i} \subseteq [x]_{R_j} \quad (\forall x \in U),$$

because any y related to x under $R_i$ is also related under $R_j$. Now, if $x \in \underline{\mathrm{apr}}_j(A)$, then $[x]_{R_j} \subseteq A$, hence $[x]_{R_i} \subseteq [x]_{R_j} \subseteq A$, so $x \in \underline{\mathrm{apr}}_i(A)$. Therefore $\underline{\mathrm{apr}}_i(A) \supseteq \underline{\mathrm{apr}}_j(A)$.

Similarly, if $x \in \overline{\mathrm{apr}}_i(A)$, then $[x]_{R_i} \cap A \neq \emptyset$. Since $[x]_{R_i} \subseteq [x]_{R_j}$, we also have $[x]_{R_j} \cap A \neq \emptyset$, so $x \in \overline{\mathrm{apr}}_j(A)$. Hence $\overline{\mathrm{apr}}_i(A) \subseteq \overline{\mathrm{apr}}_j(A)$.
2.42 Rough cubic sets
Rough cubic sets approximate a cubic set via a cubic relation, producing lower N(A) and upper
H(A) cubic memberships; inequality indicates indefinability in uncertain analysis [217]. Concepts
with a similar structural flavor include cubic intuitionistic fuzzy sets [218–220] and neutrosophic
cubic sets [221–226].
Definition 2.42.1 (Rough cubic set (cubic rough set) induced by a cubic relation). Let X be a nonempty universe. A cubic set in X is a mapping

$$A = \langle \tilde{A}, \lambda \rangle : X \to [I] \times I,$$

where $\tilde{A} : X \to [I]$ is an interval-valued fuzzy set and $\lambda : X \to I$ is a fuzzy set. A cubic relation on X is a cubic set

$$R = \langle \tilde{R}, r \rangle : X \times X \to [I] \times I.$$

Write $R^c = \langle \tilde{R}^c, r^c \rangle$ for the pointwise complement.

For a cubic set $A = \langle \tilde{A}, \lambda \rangle$ and x ∈ X, define the (P-system) lower and (P-system) upper rough approximations by

$$\underline{\mathrm{Apr}}_R(A)(x) := \Big\langle \bigwedge_{y \in X} \big( \tilde{A}(y) \vee \tilde{R}^c(y, x) \big),\ \bigwedge_{y \in X} \big( \lambda(y) \vee r^c(y, x) \big) \Big\rangle,$$

$$\overline{\mathrm{Apr}}_R(A)(x) := \Big\langle \bigvee_{y \in X} \big( \tilde{A}(y) \wedge \tilde{R}(x, y) \big),\ \bigvee_{y \in X} \big( \lambda(y) \wedge r(x, y) \big) \Big\rangle.$$

The pair

$$RCS_R(A) := \big( \underline{\mathrm{Apr}}_R(A), \overline{\mathrm{Apr}}_R(A) \big)$$

is called the rough cubic set (or cubic rough set) of A induced by R. If $\underline{\mathrm{Apr}}_R(A) = \overline{\mathrm{Apr}}_R(A)$, then A is called definable (with respect to R); otherwise, A is called undefinable.
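A runnable sketch of Definition 2.42.1 on a two-element universe. All numeric data are invented for illustration, and the interval complement is taken pointwise as $[a, b]^c = [1-b, 1-a]$ (an assumption of this sketch, not stated in the source):

```python
# Lower/upper rough cubic approximations over a tiny universe X.
X = ["x1", "x2"]
Atil = {"x1": (0.2, 0.5), "x2": (0.4, 0.7)}    # interval-valued part of A
lam  = {"x1": 0.3, "x2": 0.6}                  # fuzzy part of A
Rtil = {(a, b): (1.0, 1.0) if a == b else (0.6, 0.9) for a in X for b in X}
r    = {(a, b): 1.0 if a == b else 0.8 for a in X for b in X}

def i_meet(u, v): return (min(u[0], v[0]), min(u[1], v[1]))
def i_join(u, v): return (max(u[0], v[0]), max(u[1], v[1]))
def i_comp(u):    return (1 - u[1], 1 - u[0])  # interval complement

def lower(x):
    """< meet_y (Atil(y) v Rtil^c(y,x)), meet_y (lam(y) v r^c(y,x)) >"""
    ivs = [i_join(Atil[y], i_comp(Rtil[(y, x)])) for y in X]
    scs = [max(lam[y], 1 - r[(y, x)]) for y in X]
    iv = ivs[0]
    for u in ivs[1:]:
        iv = i_meet(iv, u)
    return iv, min(scs)

def upper(x):
    """< join_y (Atil(y) ^ Rtil(x,y)), join_y (lam(y) ^ r(x,y)) >"""
    ivs = [i_meet(Atil[y], Rtil[(x, y)]) for y in X]
    scs = [min(lam[y], r[(x, y)]) for y in X]
    iv = ivs[0]
    for u in ivs[1:]:
        iv = i_join(iv, u)
    return iv, max(scs)
```

Here `lower("x1") != upper("x1")`, so this A is undefinable with respect to R.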


# Page. 86

![Page Image](https://bcdn.docswell.com/page/PEXQKQ66JX.jpg)

2.43 MOD Rough Set
MOD Rough sets use modular arithmetic on approximation operators, encoding membership
degrees as residues, enabling periodic uncertainty modeling while preserving lower and upper
bounds consistently.
Definition 2.43.1 (MOD semantic value (mod n)). Fix an integer n ≥ 2. Let Tag := {none, pseudo, genuine} be totally ordered by

$$\mathrm{none} < \mathrm{pseudo} < \mathrm{genuine}.$$

We write J for the join on Tag, i.e.

$$J(t_1, t_2) := \max\{t_1, t_2\}, \qquad \bigvee_{i \in I} t_i := \max\{t_i \mid i \in I\}.$$
A MOD semantic membership is a pair (µ, t) ∈ [0, 1] × Tag, where µ is the numeric membership
degree and t records the modular provenance tag.
Definition 2.43.2 (MOD rough approximations and MOD rough set). Let U ≠ ∅ be a universe and let R ⊆ U × U be an equivalence relation. For x ∈ U, write the R-granule

$$[x]_R := \{\, y \in U \mid (x, y) \in R \,\}.$$

Let $\tilde{A}$ be a MOD-described concept on U, given semantically by two functions

$$\mu_{\tilde{A}} : U \to [0, 1], \qquad t_{\tilde{A}} : U \to \mathrm{Tag}.$$

Define the MOD lower and MOD upper approximations of $\tilde{A}$ by the semantic pairs

$$\underline{\mathrm{apr}}_R^{\mathrm{MOD}}(\tilde{A}) := \big( \underline{\mu}_{\tilde{A}}, \underline{t}_{\tilde{A}} \big), \qquad \overline{\mathrm{apr}}_R^{\mathrm{MOD}}(\tilde{A}) := \big( \overline{\mu}_{\tilde{A}}, \overline{t}_{\tilde{A}} \big),$$

where, for each x ∈ U,

$$\underline{\mu}_{\tilde{A}}(x) := \inf_{y \in [x]_R} \mu_{\tilde{A}}(y), \qquad \overline{\mu}_{\tilde{A}}(x) := \sup_{y \in [x]_R} \mu_{\tilde{A}}(y),$$

and the tag parts are aggregated by join:

$$\underline{t}_{\tilde{A}}(x) := \bigvee_{y \in [x]_R} t_{\tilde{A}}(y), \qquad \overline{t}_{\tilde{A}}(x) := \bigvee_{y \in [x]_R} t_{\tilde{A}}(y).$$

The MOD rough set induced by (U, R) and $\tilde{A}$ is the approximation pair

$$MRS_R(\tilde{A}) := \big( \underline{\mathrm{apr}}_R^{\mathrm{MOD}}(\tilde{A}), \overline{\mathrm{apr}}_R^{\mathrm{MOD}}(\tilde{A}) \big).$$


# Page. 87

![Page Image](https://bcdn.docswell.com/page/3EK9591GED.jpg)

Example 2.43.3 (MOD rough set in content moderation (grouped posts)). Let U = {p1 , p2 , p3 , p4 }
be four user posts to be moderated. Assume posts are indiscernible (for the platform’s first-pass
workflow) when they come from the same account in the same hour; this induces an equivalence
relation R with classes
[p1 ]R = [p2 ]R = {p1 , p2 },
[p3 ]R = [p4 ]R = {p3 , p4 }.
We model the MOD-described concept $\tilde{A}$ = "potentially harmful content" by:

$$\mu_{\tilde{A}} : U \to [0, 1] \text{ (risk score)}, \qquad t_{\tilde{A}} : U \to \mathrm{Tag} \text{ (explanatory tags)}.$$

Let the tag-domain be the powerset lattice

$$\mathrm{Tag} := \mathcal{P}(\{\mathrm{violence}, \mathrm{harassment}, \mathrm{spam}, \mathrm{scam}\}),$$

so that ∨ = ∪ and $\bigvee$ is set-union.

Assume the following MOD semantics are produced by an automatic screening system:

| x | $\mu_{\tilde{A}}(x)$ | $t_{\tilde{A}}(x)$ |
|---|---|---|
| p1 | 0.90 | {violence} |
| p2 | 0.60 | {violence, harassment} |
| p3 | 0.20 | {spam} |
| p4 | 0.40 | {spam, scam} |
By Definition 2.43.2, for each x ∈ U,

$$\underline{\mu}_{\tilde{A}}(x) = \inf_{y \in [x]_R} \mu_{\tilde{A}}(y), \qquad \overline{\mu}_{\tilde{A}}(x) = \sup_{y \in [x]_R} \mu_{\tilde{A}}(y),$$

and

$$\underline{t}_{\tilde{A}}(x) = \bigvee_{y \in [x]_R} t_{\tilde{A}}(y) = \bigcup_{y \in [x]_R} t_{\tilde{A}}(y), \qquad \overline{t}_{\tilde{A}}(x) = \bigcup_{y \in [x]_R} t_{\tilde{A}}(y).$$
Class {p1, p2}. For x ∈ {p1, p2},

$$\underline{\mu}_{\tilde{A}}(x) = \min\{0.90, 0.60\} = 0.60, \qquad \overline{\mu}_{\tilde{A}}(x) = \max\{0.90, 0.60\} = 0.90,$$

$$\underline{t}_{\tilde{A}}(x) = \overline{t}_{\tilde{A}}(x) = \{\mathrm{violence}\} \cup \{\mathrm{violence}, \mathrm{harassment}\} = \{\mathrm{violence}, \mathrm{harassment}\}.$$

Class {p3, p4}. For x ∈ {p3, p4},

$$\underline{\mu}_{\tilde{A}}(x) = \min\{0.20, 0.40\} = 0.20, \qquad \overline{\mu}_{\tilde{A}}(x) = \max\{0.20, 0.40\} = 0.40,$$

$$\underline{t}_{\tilde{A}}(x) = \overline{t}_{\tilde{A}}(x) = \{\mathrm{spam}\} \cup \{\mathrm{spam}, \mathrm{scam}\} = \{\mathrm{spam}, \mathrm{scam}\}.$$
Hence the MOD lower and MOD upper approximations are the semantic pairs

$$\underline{\mathrm{apr}}_R^{\mathrm{MOD}}(\tilde{A}) = \big( \underline{\mu}_{\tilde{A}}, \underline{t}_{\tilde{A}} \big), \qquad \overline{\mathrm{apr}}_R^{\mathrm{MOD}}(\tilde{A}) = \big( \overline{\mu}_{\tilde{A}}, \overline{t}_{\tilde{A}} \big),$$

and the induced MOD rough set is

$$MRS_R(\tilde{A}) = \big( \underline{\mathrm{apr}}_R^{\mathrm{MOD}}(\tilde{A}), \overline{\mathrm{apr}}_R^{\mathrm{MOD}}(\tilde{A}) \big).$$

Interpretation. Within each indiscernibility class, $\underline{\mu}$ gives a conservative risk score (worst case), $\overline{\mu}$ gives a permissive score (best case), and the tag-join aggregates all plausible moderation reasons observed in that class.
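Example 2.43.3 can be sketched directly with tags as Python sets, so that the join is set union (the function name is illustrative):

```python
# MOD rough approximations for the moderation example (Example 2.43.3).
classes = {p: c for c in [("p1", "p2"), ("p3", "p4")] for p in c}
mu = {"p1": 0.90, "p2": 0.60, "p3": 0.20, "p4": 0.40}
tag = {"p1": {"violence"}, "p2": {"violence", "harassment"},
       "p3": {"spam"}, "p4": {"spam", "scam"}}

def mod_approx(x):
    """Return ((lower mu, tag join), (upper mu, tag join)) on the granule of x."""
    gran = classes[x]
    t = set().union(*(tag[y] for y in gran))     # tag join = set union here
    return (min(mu[y] for y in gran), t), (max(mu[y] for y in gran), t)
```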


# Page. 88

![Page Image](https://bcdn.docswell.com/page/L73WKWQ575.jpg)

Theorem 2.43.4 (Well-definedness of MOD rough approximations). In Definition 2.43.2, the mappings $\underline{\mu}_{\tilde{A}}, \overline{\mu}_{\tilde{A}} : U \to [0, 1]$ and $\underline{t}_{\tilde{A}}, \overline{t}_{\tilde{A}} : U \to \mathrm{Tag}$ are well-defined. Moreover, they are constant on R-equivalence classes; that is, if $(x, x') \in R$, then

$$\underline{\mu}_{\tilde{A}}(x) = \underline{\mu}_{\tilde{A}}(x'), \quad \overline{\mu}_{\tilde{A}}(x) = \overline{\mu}_{\tilde{A}}(x'), \quad \underline{t}_{\tilde{A}}(x) = \underline{t}_{\tilde{A}}(x'), \quad \overline{t}_{\tilde{A}}(x) = \overline{t}_{\tilde{A}}(x').$$

Proof. Fix x ∈ U. Since $[x]_R \subseteq U$ is a set and $\mu_{\tilde{A}}$ maps into [0, 1], the set $\{\mu_{\tilde{A}}(y) \mid y \in [x]_R\} \subseteq [0, 1]$ admits inf and sup (because [0, 1] is complete). Hence $\underline{\mu}_{\tilde{A}}(x)$ and $\overline{\mu}_{\tilde{A}}(x)$ are well-defined real numbers in [0, 1].

Likewise, $\{t_{\tilde{A}}(y) \mid y \in [x]_R\} \subseteq \mathrm{Tag}$ is a subset of a finite totally ordered set, so its supremum (equivalently, its maximum) exists and is unique. Thus $\underline{t}_{\tilde{A}}(x)$ and $\overline{t}_{\tilde{A}}(x)$ are well-defined.

Now assume $(x, x') \in R$. Since R is an equivalence relation, we have $[x]_R = [x']_R$. Therefore the sets of values being aggregated coincide:

$$\{\mu_{\tilde{A}}(y) \mid y \in [x]_R\} = \{\mu_{\tilde{A}}(y) \mid y \in [x']_R\}, \qquad \{t_{\tilde{A}}(y) \mid y \in [x]_R\} = \{t_{\tilde{A}}(y) \mid y \in [x']_R\}.$$

Taking inf, sup, and $\bigvee$ on both sides yields the desired equalities. Hence the MOD rough approximations are constant on granules and, in particular, independent of the chosen representative of an equivalence class. This proves well-definedness.
2.44 Topological Rough Sets
Topological rough sets approximate subsets by interior and closure in a topology, modeling
definite and possible membership under open-neighborhood uncertainty.
Definition 2.44.1 (Topological rough set (interior–closure approximation)). Let (X, τ) be a topological space, and let Y ⊆ X. Define the topological lower and topological upper approximations of Y by

$$\underline{Y}_\tau := \mathrm{int}_\tau(Y), \qquad \overline{Y}^\tau := \mathrm{cl}_\tau(Y),$$

where $\mathrm{int}_\tau(Y)$ and $\mathrm{cl}_\tau(Y)$ denote the interior and closure of Y in (X, τ), respectively. Then

$$\underline{Y}_\tau \subseteq Y \subseteq \overline{Y}^\tau.$$

The pair $\big( \underline{Y}_\tau, \overline{Y}^\tau \big)$ is called the topological rough approximation of Y, and Y is said to be topologically rough if $\underline{Y}_\tau \neq \overline{Y}^\tau$.
Example 2.44.2 (GPS geofencing with uncertain location: a topological rough set). A delivery
app classifies whether a courier is inside a service zone Y (a downtown area), but GPS is noisy
and the system can only resolve location up to cell-tower regions.
Let X be a finite set of tower-cells covering the city, and take τ to be the topology generated by
these cells (so an open set is any union of cells). Identify each cell with its covered area.


# Page. 89

![Page Image](https://bcdn.docswell.com/page/87DK3K9YJG.jpg)

Let Y ⊆ X be the set of points (or fine-grained locations) that truly lie in the downtown zone. Because the app observes only at the cell level, it can confidently say a location is in the zone only when its entire observed cell lies in Y. Hence the definitely-in-zone set is

$$\underline{Y}_\tau = \mathrm{int}_\tau(Y),$$

the union of all observed cells fully contained in the service zone.

Conversely, any observed cell that intersects Y might contain a downtown point, so the app must treat all such cells as possibly in the zone. This yields the possibly-in-zone set

$$\overline{Y}^\tau = \mathrm{cl}_\tau(Y),$$

the smallest cell-union containing Y. If boundary cells intersect Y but are not contained in Y, then $\underline{Y}_\tau \neq \overline{Y}^\tau$, so Y is topologically rough under cell-level uncertainty.
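A minimal sketch of interior/closure over a finite cell-generated topology (the five-point space and the zone Y are invented for illustration):

```python
# Topology generated by tower cells: every open set is a union of cells.
cells = [frozenset({1}), frozenset({2, 3}), frozenset({4}), frozenset({5})]
opens = {frozenset()}
for c in cells:                        # incrementally build all unions of cells
    opens |= {o | c for o in opens}
X = frozenset().union(*opens)          # the whole space {1, ..., 5}

def interior(Y):
    """Union of all open sets contained in Y."""
    return frozenset(x for o in opens if o <= Y for x in o)

def closure(Y):
    """Complement of the interior of the complement."""
    return X - interior(X - Y)

Y = frozenset({1, 2, 4})               # zone cutting through cell {2, 3}
lower, upper = interior(Y), closure(Y)
```

Since the zone cuts the cell {2, 3}, `lower` is strictly smaller than `upper`: Y is topologically rough.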
2.45 Preorder Rough Sets
Preorder rough sets approximate subsets using preorder up-sets and down-sets, yielding increasing/decreasing lower and upper regions for ordered uncertainty.
Definition 2.45.1 (Preorder rough set (increasing/decreasing approximations)). Let (X, ) be
a preordered set, i.e.,  is reflexive and transitive on X . For each x ∈ X , define the principal
up-set and down-set
N↑ (x) := { y ∈ X | x  y },
N↓ (x) := { y ∈ X | y  x }.
For Y ⊆ X, the increasing (upper-set) rough approximations are

$$\underline{Y}^{\uparrow} := \{\, x \in X \mid N_\uparrow(x) \subseteq Y \,\}, \qquad \overline{Y}^{\uparrow} := \{\, x \in X \mid N_\uparrow(x) \cap Y \neq \emptyset \,\},$$

and the decreasing (lower-set) rough approximations are

$$\underline{Y}^{\downarrow} := \{\, x \in X \mid N_\downarrow(x) \subseteq Y \,\}, \qquad \overline{Y}^{\downarrow} := \{\, x \in X \mid N_\downarrow(x) \cap Y \neq \emptyset \,\}.$$
Example 2.45.2 (Triage severity as a preorder: ICU-need under ordered uncertainty). Let
X = {1, 2, 3, 4, 5}
be emergency triage levels, where 1 is mild and 5 is critical. Use the natural preorder  given
by the usual order ≤:
x  y ⇐⇒ x ≤ y.
Hence the principal up-set and down-set are
N↑ (x) = {x, x+1, . . . , 5},
N↓ (x) = {1, 2, . . . , x}.
Let
Y = {4, 5}
represent the set of severity levels that require ICU-level care.


# Page. 90

![Page Image](https://bcdn.docswell.com/page/VJPK4KZ2E8.jpg)

Increasing (upper-set) approximations.

$$\underline{Y}^{\uparrow} = \{x \in X \mid N_\uparrow(x) \subseteq Y\} = \{4, 5\}, \qquad \overline{Y}^{\uparrow} = \{x \in X \mid N_\uparrow(x) \cap Y \neq \emptyset\} = \{1, 2, 3, 4, 5\} = X.$$

Interpretation: levels 4 and 5 are definitely ICU-requiring because any escalation remains in Y, while every level is possibly ICU-requiring, since each up-set N↑(x) contains 4 and 5 and therefore meets Y.
Decreasing (lower-set) approximations (optional reading).

$$\underline{Y}^{\downarrow} = \{x \in X \mid N_\downarrow(x) \subseteq Y\} = \emptyset, \qquad \overline{Y}^{\downarrow} = \{x \in X \mid N_\downarrow(x) \cap Y \neq \emptyset\} = \{4, 5\}.$$
This reflects that ICU-need is naturally upward-oriented in the severity order.
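The four approximations follow directly from Definition 2.45.1; a sketch (the integer encoding mirrors the example's order):

```python
# Preorder rough approximations on the triage chain 1 <= ... <= 5, Y = {4, 5}.
X = [1, 2, 3, 4, 5]
Y = {4, 5}                                          # ICU-level severities
N_up   = {x: {y for y in X if x <= y} for x in X}   # principal up-sets
N_down = {x: {y for y in X if y <= x} for x in X}   # principal down-sets

lower_up = {x for x in X if N_up[x] <= Y}           # definitely ICU (increasing)
upper_up = {x for x in X if N_up[x] & Y}            # possibly ICU (increasing)
lower_dn = {x for x in X if N_down[x] <= Y}
upper_dn = {x for x in X if N_down[x] & Y}
```

Every up-set N↑(x) reaches 4 and 5, so `upper_up` is all of X, while the decreasing approximations confirm the upward orientation of ICU-need.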
2.46 Directed Rough Set
Directed rough sets approximate subsets using granules induced by a directed relation, capturing
asymmetric reachability/preference, yielding lower and upper approximations [227, 228].
Definition 2.46.1 (Up-directed approximation space). [227,228] A general approximation space
is a pair (U, R), where U ≠ ∅ and R ⊆ U × U is a binary relation. The relation R is up-directed if

$$(\forall a, b \in U)\,(\exists c \in U)\ \big( aRc \wedge bRc \big).$$
If (U, R) satisfies this condition, we call it an up-directed approximation space.
Definition 2.46.2 (Neighborhoods). Let (U, R) be a general approximation space. For each
a ∈ U , define
[a] := {x ∈ U : xRa},
[a]i := {x ∈ U : aRx},
[a]o := {x ∈ U : aRx ∧ xRa}.
(These are the neighborhood, inverse-neighborhood, and symmetric neighborhood of a.)
Definition 2.46.3 (Directed rough approximations and directed rough set). [227, 228] Let
(U, R) be an up-directed approximation space and let A ⊆ U . Define the directed lower and
directed upper approximations of A by
$$A^{\ell} := \bigcup \big\{\, [a] : a \in U,\ [a] \subseteq A \,\big\}, \qquad A^{u} := \bigcup \big\{\, [a] : a \in U,\ [a] \cap A \neq \emptyset \,\big\}.$$

The ordered pair

$$RS_R(A) := (A^{\ell}, A^{u})$$

is called the directed rough set of A (with respect to R). Its directed boundary is

$$Bn_R(A) := A^{u} \setminus A^{\ell}.$$

We say that A is (directly) definable if $A^{\ell} = A^{u}$, and directly rough otherwise.


# Page. 91

![Page Image](https://bcdn.docswell.com/page/2EVVXVWXEQ.jpg)

Example 2.46.4 (Directed rough set for concept mastery in an adaptive math tutor). Let
U := {F, R, P, C}
be a set of concepts, where F = Fractions, R = Ratios, P = Percentages, and C = Conversions.
We interpret xRy as: “concept y can serve as an immediate covering/aggregation concept for
x” in a local concept-organization ontology.
Define an up-directed relation R ⊆ U × U by specifying the outgoing sets
Out(x) := { y ∈ U | xRy } :
Out(F ) = {F, R, P }, Out(R) = {R, P, C}, Out(P ) = {F, P, C}, Out(C) = {F, R, C}.
Then Out(x) ∩ Out(y) ≠ ∅ for all x, y ∈ U; hence (U, R) is up-directed.
For each a ∈ U , the directed granule (neighborhood) is
[a] := { x ∈ U | xRa }.
From the definition of R,
$$[F] = \{F, P, C\}, \quad [R] = \{F, R, C\}, \quad [P] = \{F, R, P\}, \quad [C] = \{R, P, C\}.$$
Suppose a short quiz confirms competence in fractions, ratios, and percentages, so the target set
is
A := {F, R, P } ⊆ U.
The directed lower and directed upper approximations (Definition 2.46.3) are:

$$A^{\ell} = \bigcup \big\{ [a] \mid a \in U,\ [a] \subseteq A \big\} = [P] = \{F, R, P\},$$

because $[P] \subseteq A$ but $[F], [R], [C] \not\subseteq A$. Moreover,

$$A^{u} = \bigcup \big\{ [a] \mid a \in U,\ [a] \cap A \neq \emptyset \big\} = [F] \cup [R] \cup [P] \cup [C] = U.$$

Therefore the directed rough set of A is

$$RS_R(A) = (A^{\ell}, A^{u}) = (\{F, R, P\}, \{F, R, P, C\}),$$

and the directed boundary is

$$Bn_R(A) = A^{u} \setminus A^{\ell} = \{C\}.$$
Interpretation: the tutor can definitely attribute mastery to {F, R, P }, while C remains possibly
implicated because conversion skills are linked to (and hence “pulled in” by) the directed granules.
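Example 2.46.4 can be sketched by recovering the directed granules [a] = {x : x R a} from the outgoing sets Out(x) and taking the unions of Definition 2.46.3 (illustrative code, not from the source):

```python
# Directed rough set for the adaptive-tutor example.
U = ["F", "R", "P", "C"]
Out = {"F": {"F", "R", "P"}, "R": {"R", "P", "C"},
       "P": {"F", "P", "C"}, "C": {"F", "R", "C"}}
gran = {a: {x for x in U if a in Out[x]} for a in U}   # [a] = {x | x R a}

A = {"F", "R", "P"}                                    # quiz-confirmed mastery
lower, upper = set(), set()
for a in U:
    if gran[a] <= A:
        lower |= gran[a]       # granule certainly inside A
    if gran[a] & A:
        upper |= gran[a]       # granule touches A
```

The boundary `upper - lower` is `{"C"}`, matching the example.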


# Page. 92

![Page Image](https://bcdn.docswell.com/page/57GLVLDREL.jpg)

2.47 Strait Rough Set
A strait rough set approximates a target set by unions of partition blocks fully included in, or
intersecting, it [229].
Definition 2.47.1 (Strait rough set). [229] Let V be a nonempty universe. Let E be a nonempty parameter set and let

$$F : E \longrightarrow \mathcal{P}(V)$$

be a mapping such that the family $\{F(e) \mid e \in E\}$ is a partition of V (i.e., $F(e) \neq \emptyset$ for all e, $\bigcup_{e \in E} F(e) = V$, and $F(e) \cap F(e') = \emptyset$ for $e \neq e'$). For any target set X ⊆ V, define:
(1) Strait lower approximation.

$$\underline{\mathrm{apr}}_F(X) := \bigcup_{\substack{e \in E \\ F(e) \subseteq X}} F(e).$$

(2) Strait upper approximation.

$$\overline{\mathrm{apr}}_F(X) := \bigcup_{\substack{e \in E \\ F(e) \cap X \neq \emptyset}} F(e).$$

(3) Strait boundary region.

$$Bn_F(X) := \overline{\mathrm{apr}}_F(X) \setminus \underline{\mathrm{apr}}_F(X).$$

The ordered pair

$$SR_F(X) := \big( \underline{\mathrm{apr}}_F(X), \overline{\mathrm{apr}}_F(X) \big)$$

is called the strait rough set of X (with respect to F). We say that X is strait definable if $\underline{\mathrm{apr}}_F(X) = \overline{\mathrm{apr}}_F(X)$, and strait rough otherwise.
Example 2.47.2 (Real-life example: delivery coverage under district partition). Let V be the
set of households in a city. Let E be the set of administrative districts (wards), and for each
e ∈ E let
F (e) ⊆ V
be the set of households located in district e. Then {F (e) | e ∈ E} forms a partition of V (every
household lies in exactly one district).
Suppose a grocery company plans to offer same-day delivery. Let
X⊆V
be the set of households that are actually within reach of the service, based on distance-to-hub,
traffic, and staffing (so X is not known perfectly at the district level).


# Page. 93

![Page Image](https://bcdn.docswell.com/page/4EQY6Y1YJP.jpg)

The company’s policy is district-based: it can only announce coverage by whole districts. Hence the strait approximations induced by F are:

$$\underline{\mathrm{apr}}_F(X) = \bigcup_{\substack{e \in E \\ F(e) \subseteq X}} F(e), \qquad \overline{\mathrm{apr}}_F(X) = \bigcup_{\substack{e \in E \\ F(e) \cap X \neq \emptyset}} F(e).$$

Here, $\underline{\mathrm{apr}}_F(X)$ is the set of households in districts fully reachable (safe to promise), while $\overline{\mathrm{apr}}_F(X)$ is the set of households in districts that are partly reachable (possibly reachable). The boundary

$$Bn_F(X) = \overline{\mathrm{apr}}_F(X) \setminus \underline{\mathrm{apr}}_F(X)$$

consists of households in mixed districts, where some addresses are reachable and others are not; these are the districts for which the company must either refine the partition or accept uncertainty.
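Definition 2.47.1 can be sketched over a toy district partition (district and household names are invented for this illustration):

```python
# Strait lower/upper approximations over a partition given by F : E -> P(V).
def strait(F, X):
    """Return (lower, upper) unions of whole partition blocks."""
    lo, up = set(), set()
    for block in F.values():
        if block <= X:
            lo |= block        # district fully reachable: safe to promise
        if block & X:
            up |= block        # district partly reachable: possibly covered
    return lo, up

F = {"e1": {"h1", "h2"}, "e2": {"h3", "h4"}, "e3": {"h5"}}  # partition of V
X = {"h1", "h2", "h3"}                                      # reachable households
lo, up = strait(F, X)
boundary = up - lo
```

District e2 is mixed, so its households land in the boundary; e3 meets X nowhere and is excluded from both approximations.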
2.48 Dialectical rough set
A dialectical rough set uses paraconsistent dialectical opposition to generate evolving lower/upper
approximations, tolerating contradictions among granules and objects explicitly [230].
Definition 2.48.1 (Dialectical rough set). [230] Let X = (S, R) be a Pawlak approximation
space, where S is a nonempty finite universe and R is an equivalence relation on S . For A ⊆ S ,
define the Pawlak lower and upper approximations
$$A^{\ell} := \{\, s \in S \mid [s]_R \subseteq A \,\}, \qquad A^{u} := \{\, s \in S \mid [s]_R \cap A \neq \emptyset \,\},$$

where $[s]_R := \{t \in S \mid (s, t) \in R\}$.
Define the rough equality ≈ on $\mathcal{P}(S)$ by

$$A \approx B \iff A^{\ell} = B^{\ell} \text{ and } A^{u} = B^{u},$$

and let $\mathcal{P}(S)|_{\approx} := \mathcal{P}(S)/\!\approx$ denote the set of rough objects (equivalence classes under ≈).

Consider a concrete enriched pre-rough algebra (CERA)

$$W(X) = \big\langle \mathcal{P}(S) \cup (\mathcal{P}(S)|_{\approx}), \oplus, 0, \dots \big\rangle,$$

where $\tau_1$ (resp. $\tau_2$) is the type predicate selecting the classical part $\mathcal{P}(S)$ (resp. the rough part $\mathcal{P}(S)|_{\approx}$), and where the term $0 \oplus x$ (with $\tau_1(x)$) represents the canonical embedding of a classical object into the rough semantic domain.

The dialectical universe is defined by

$$K := \{(x, 0 \oplus x) \mid \tau_1(x)\} \cup \{(b, x) \mid \tau_2(b) \wedge \tau_1(x) \wedge x \oplus 0 = b\}.$$
A dialectical rough set is any element of K .
Example 2.48.2 (Real-life example: spam filtering with dialectical human–model interaction).
Let S be a finite set of recent email messages in an inbox. Define an equivalence relation R on
S by
m R m′ ⇐⇒ m and m′ share the same feature signature


# Page. 94

![Page Image](https://bcdn.docswell.com/page/KJ4W4WRZ71.jpg)

(sender-domain class, keyword bucket, and link-pattern type).
Thus each R-class [m]R is an information granule: messages that look indistinguishable at the
chosen feature resolution.
Let A ⊆ S be the (unknown) set of truly spam messages. Given limited features, the system
forms Pawlak approximations
$$A^{\ell} = \{\, m \in S \mid [m]_R \subseteq A \,\}, \qquad A^{u} = \{\, m \in S \mid [m]_R \cap A \neq \emptyset \,\}.$$

Hence $A^{\ell}$ are messages that are certainly spam at this granularity, while $A^{u} \setminus A^{\ell}$ are borderline messages whose R-classes contain both spam and non-spam.
Now a user provides feedback (“spam” / “not spam”) on a subset of messages, producing a classical labeled set $x \in \mathcal{P}(S)$ (e.g., x is the set the user marked as spam today). The system simultaneously maintains the corresponding rough object

$$b = [x]_{\approx} \in \mathcal{P}(S)|_{\approx},$$

where ≈ is the rough-equality relation

$$x \approx y \iff x^{\ell} = y^{\ell} \text{ and } x^{u} = y^{u}.$$
Intuitively, b represents the model’s semantic view of “spam today” at the granularity induced
by R, where different user-labeled sets that induce the same (`, u) pair are identified.
In the CERA viewpoint, the dialectical universe
K = {(x, 0 ⊕ x) | τ1 (x)} ∪ {(b, x) | τ2 (b) ∧ τ1 (x) ∧ x ⊕ 0 = b}
encodes the two-way coupling between the user’s concrete labeling x and the system’s rough
semantic state b. A typical dialectical rough set instance is the pair
(x, 0 ⊕ x) ∈ K,
meaning: “the user’s explicit spam set x together with its embedded rough interpretation”.
Operationally, this supports iterative resolution of contradictions (user corrections) while keeping
a stable rough semantic layer that is invariant under ≈-equivalent relabelings.
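The rough-object quotient $\mathcal{P}(S)|_{\approx}$ underlying this construction can be enumerated on a small universe; the sketch below (universe and granulation invented for illustration) only illustrates the quotient, not the full CERA structure:

```python
# Enumerate the rough objects P(S)|~ : subsets identified when their
# Pawlak (lower, upper) pairs coincide.
from itertools import combinations

S = ["m1", "m2", "m3", "m4"]
classes = [{"m1", "m2"}, {"m3", "m4"}]          # equivalence classes of R

def approx(A):
    """Pawlak (lower, upper) pair of A under the granulation above."""
    lo, up = set(), set()
    for c in classes:
        if c <= A:
            lo |= c
        if c & A:
            up |= c
    return frozenset(lo), frozenset(up)

def powerset(xs):
    return [set(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

rough_objects = {}
for A in powerset(S):
    rough_objects.setdefault(approx(A), []).append(A)
```

Each class can be missed, cut, or covered by A, so the 16 subsets collapse to 9 rough objects.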
2.49 Sheaf Rough Set
Sheaf Rough Set models uncertainty on sheaf sections: two sections are related when they agree
locally; neighborhood-based lower/upper approximations reflect gluing-consistent information
robustly across contexts.
Definition 2.49.1 (Sheaf Rough Set). Let (X, τ) be a topological space and let $\mathcal{F}$ be a (set-valued) sheaf on X. Consider the universe of local sections

$$\mathrm{Sec}(\mathcal{F}) := \big\{\, (U, s) \mid U \in \tau,\ U \neq \emptyset,\ s \in \mathcal{F}(U) \,\big\}.$$

Define a binary relation $R_{sh}$ on $\mathrm{Sec}(\mathcal{F})$ by

$$(U, s)\ R_{sh}\ (V, t) \ :\iff\ \exists\, W \in \tau \text{ with } \emptyset \neq W \subseteq U \cap V \text{ such that } s|_W = t|_W.$$


# Page. 95

![Page Image](https://bcdn.docswell.com/page/LE1Y4YQD7G.jpg)

For any $A \subseteq \mathrm{Sec}(\mathcal{F})$, the sheaf lower and sheaf upper approximations of A are defined (via neighborhoods) by

$$\underline{A}_{sh} := \big\{\, x \in \mathrm{Sec}(\mathcal{F}) \mid N_{sh}(x) \subseteq A \,\big\}, \qquad \overline{A}^{sh} := \big\{\, x \in \mathrm{Sec}(\mathcal{F}) \mid N_{sh}(x) \cap A \neq \emptyset \,\big\},$$

where the sheaf neighborhood of x is

$$N_{sh}(x) := \{\, y \in \mathrm{Sec}(\mathcal{F}) \mid x\ R_{sh}\ y \,\}.$$

The pair $\big( \underline{A}_{sh}, \overline{A}^{sh} \big)$ is called the Sheaf Rough Set determined by A (with respect to $\mathcal{F}$).
Remark 2.49.2. The relation Rsh is reflexive and symmetric (a tolerance relation), but in
general need not be transitive. Hence the above is naturally interpreted as a tolerance-based
rough set induced by sheaf restrictions.
Proposition 2.49.3 (Well-definedness of the Sheaf Rough Set). In Definition 2.49.1, the relation $R_{sh}$ on $\mathrm{Sec}(\mathcal{F})$ is well-defined and is a tolerance relation (reflexive and symmetric). Moreover, for every $A \subseteq \mathrm{Sec}(\mathcal{F})$, the sets $\underline{A}_{sh}$ and $\overline{A}^{sh}$ are well-defined subsets of $\mathrm{Sec}(\mathcal{F})$ and satisfy

$$\underline{A}_{sh} \subseteq A \subseteq \overline{A}^{sh}.$$
Proof. (Well-definedness of $R_{sh}$). If $(U, s), (V, t) \in \mathrm{Sec}(\mathcal{F})$ and $W \subseteq U \cap V$ is a nonempty open set, then the restrictions $s|_W$ and $t|_W$ are defined by the restriction maps of the sheaf $\mathcal{F}$ (indeed, a presheaf already suffices for restriction). Hence the statement “$s|_W = t|_W$” is unambiguous, and therefore $R_{sh}$ is well-defined.

(Reflexive). For any $(U, s) \in \mathrm{Sec}(\mathcal{F})$, choose W := U; then $\emptyset \neq W \subseteq U \cap U$ and $s|_U = s|_U$. Thus $(U, s)\ R_{sh}\ (U, s)$.

(Symmetric). If $(U, s)\ R_{sh}\ (V, t)$, then for some nonempty open $W \subseteq U \cap V$ we have $s|_W = t|_W$, which implies $t|_W = s|_W$; hence $(V, t)\ R_{sh}\ (U, s)$.

(Well-definedness of neighborhoods and approximations). For each $x \in \mathrm{Sec}(\mathcal{F})$, the neighborhood $N_{sh}(x) := \{y \in \mathrm{Sec}(\mathcal{F}) \mid x\ R_{sh}\ y\}$ is a subset of $\mathrm{Sec}(\mathcal{F})$ determined uniquely by $R_{sh}$, hence is well-defined. Therefore $\underline{A}_{sh}$ and $\overline{A}^{sh}$ are well-defined by set comprehension and are subsets of $\mathrm{Sec}(\mathcal{F})$.

(Inclusions). If $x \in \underline{A}_{sh}$, then $N_{sh}(x) \subseteq A$. By reflexivity, $x \in N_{sh}(x)$, hence $x \in A$. Thus $\underline{A}_{sh} \subseteq A$. If $x \in A$, then again by reflexivity $x \in N_{sh}(x)$, so $N_{sh}(x) \cap A \neq \emptyset$, i.e. $x \in \overline{A}^{sh}$. Hence $A \subseteq \overline{A}^{sh}$.
Example 2.49.4 (Traffic status consistency across overlapping road regions). Let the road region
be the finite topological space
X = {N, C, S},


# Page. 96

![Page Image](https://bcdn.docswell.com/page/GEWGXGM8J2.jpg)

interpreted as North, Center, South, equipped with the topology
τ = {∅, {C}, {N, C}, {C, S}, {N, C, S}}.
Consider the (constant) set-valued sheaf F with fiber
T = {G, S, R},
where G means Green/Free flow, S means Slow, and R means Red/Stop. For each nonempty
open set U ∈ τ \ {∅}, define
F(U ) = T,
and for inclusions W ⊆ U take restriction maps to be the identity (so the label does not change
when restricting).
Then
$$\mathrm{Sec}(\mathcal{F}) = \{\, (U, \ell) \mid U \in \tau \setminus \{\emptyset\},\ \ell \in T \,\}.$$

By Definition 2.49.1, two local reports $(U, \ell)$ and $(V, \ell')$ are related iff they agree on some nonempty overlap, which here reduces to:

$$(U, \ell)\ R_{sh}\ (V, \ell') \iff U \cap V \text{ contains a nonempty open set and } \ell = \ell'.$$
Scenario. Suppose two cameras report slow traffic on the two corridor segments {N, C} and {C, S}, but the central sensor {C} has not yet reported. Let the concept (set of “flagged” local sections) be

$$A := \big\{ (\{N, C\}, \mathrm{S}),\ (\{C, S\}, \mathrm{S}) \big\} \subseteq \mathrm{Sec}(\mathcal{F}).$$

Neighborhood computation. For $x_1 = (\{N, C\}, \mathrm{S})$, the $R_{sh}$-neighbors are exactly the slow reports on open sets that overlap {N, C} in a nonempty open set:

$$N_{sh}(x_1) = \big\{ (\{C\}, \mathrm{S}),\ (\{N, C\}, \mathrm{S}),\ (\{C, S\}, \mathrm{S}),\ (X, \mathrm{S}) \big\}.$$

Similarly, for $x_2 = (\{C, S\}, \mathrm{S})$,

$$N_{sh}(x_2) = \big\{ (\{C\}, \mathrm{S}),\ (\{N, C\}, \mathrm{S}),\ (\{C, S\}, \mathrm{S}),\ (X, \mathrm{S}) \big\}.$$

Approximations. Since $N_{sh}(x_i) \not\subseteq A$ (because $(\{C\}, \mathrm{S})$ and $(X, \mathrm{S})$ are not in A), neither $x_1$ nor $x_2$ belongs to the lower approximation; thus

$$\underline{A}_{sh} = \emptyset.$$

On the other hand, a section $(U, \mathrm{S})$ lies in the upper approximation iff it overlaps one of the corridor reports and has the same label S. Hence

$$\overline{A}^{sh} = \big\{ (\{C\}, \mathrm{S}),\ (\{N, C\}, \mathrm{S}),\ (\{C, S\}, \mathrm{S}),\ (X, \mathrm{S}) \big\}.$$
Interpretation: the upper region identifies all locally consistent “slow” reports that can be glued
(via overlaps) to the observed corridor slowdowns, whereas the lower region is empty because
the missing central report prevents robust certainty.
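The tolerance-based approximations of Example 2.49.4 can be sketched concretely. For this constant sheaf with identity restrictions, every nonempty intersection of the listed opens is open and contains C, so relatedness reduces to a nonempty overlap plus equal labels (that simplification is specific to this example):

```python
# Sheaf rough approximations on the three-cell road region.
opens = [frozenset({"C"}), frozenset({"N", "C"}), frozenset({"C", "S"}),
         frozenset({"N", "C", "S"})]
labels = ["G", "S", "R"]
sections = [(U, l) for U in opens for l in labels]

def related(x, y):
    """(U, l) R_sh (V, l'): nonempty overlap on which the sections agree."""
    (U, lx), (V, ly) = x, y
    return bool(U & V) and lx == ly

A = {(frozenset({"N", "C"}), "S"), (frozenset({"C", "S"}), "S")}

def nbhd(x):
    return {y for y in sections if related(x, y)}

lower = {x for x in sections if nbhd(x) <= A}
upper = {x for x in sections if nbhd(x) & A}
```

The lower approximation comes out empty and the upper consists of all four S-labeled sections, as in the example.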


# Page. 97

![Page Image](https://bcdn.docswell.com/page/47ZL6LQ9J3.jpg)

2.50 Simplicial Rough Set
Simplicial Rough Set uses a simplicial complex; vertices are indiscernible when they share identical facet-incidence signatures; Pawlak-style lower/upper approximations capture higher-order
interaction groups beyond graphs.
Definition 2.50.1 (Simplicial Rough Set). Let K be a finite simplicial complex with vertex set
V , and let M(K) denote the set of maximal simplices (facets) of K . For each vertex v ∈ V ,
define its facet-incidence signature by
ΣK (v) := { M ∈ M(K) | v ∈ M } ⊆ M(K).
Define an equivalence relation ∼K on V by
v ∼K w
:⇐⇒
ΣK (v) = ΣK (w).
For any X ⊆ V, define the simplicial lower and simplicial upper approximations by

$$\underline{X}_K := \{\, v \in V \mid [v]_{\sim_K} \subseteq X \,\}, \qquad \overline{X}^K := \{\, v \in V \mid [v]_{\sim_K} \cap X \neq \emptyset \,\},$$

where $[v]_{\sim_K}$ denotes the $\sim_K$-equivalence class of v. Then $\big( \underline{X}_K, \overline{X}^K \big)$ is called the Simplicial Rough Set of X with respect to K.
Remark 2.50.2. This construction uses higher-order incidence (via facets), so it can distinguish vertices not only by pairwise adjacency (graph structure) but by membership in higher-dimensional interaction groups.
Proposition 2.50.3 (Well-definedness of the Simplicial Rough Set). In Definition 2.50.1, the relation ∼K on V is a well-defined equivalence relation. Consequently, for every X ⊆ V, the sets X̲K and X̄K are well-defined subsets of V and satisfy

X̲K ⊆ X ⊆ X̄K.

Proof. (Well-definedness). Since K is a finite simplicial complex, the set of facets M(K) exists. For each v ∈ V, the set ΣK(v) = {M ∈ M(K) | v ∈ M} is uniquely determined; thus ΣK(v) is well-defined, and so is the relation v ∼K w ⇐⇒ ΣK(v) = ΣK(w).

(Equivalence). Reflexivity and symmetry are immediate from equality. If v ∼K w and w ∼K u, then ΣK(v) = ΣK(w) = ΣK(u), hence v ∼K u (transitivity). Thus ∼K is an equivalence relation.

(Well-definedness of approximations). For each v ∈ V, the equivalence class [v]∼K is well-defined. Therefore X̲K and X̄K are well-defined subsets of V.

(Inclusions). If v ∈ X̲K then [v]∼K ⊆ X, and since v ∈ [v]∼K we get v ∈ X, so X̲K ⊆ X. If v ∈ X, then [v]∼K ∩ X ≠ ∅ because v ∈ [v]∼K ∩ X, hence v ∈ X̄K, so X ⊆ X̄K.


# Page. 98

![Page Image](https://bcdn.docswell.com/page/YJ6W2W6DJV.jpg)

Example 2.50.4 (Team structure in a company and indistinguishability by shared projects).
Let V = {A, B, C, D, E, F} be employees. Assume the maximal cross-functional projects (facets)
are
M1 = {A, B, C}, M2 = {C, D, E}, M3 = {A, B, F}.
Let K be the simplicial complex generated by these facets (i.e., it contains all subsets of each
Mi ).
For each employee v ∈ V , the facet-incidence signature is
ΣK (A) = {M1 , M3 }, ΣK (B) = {M1 , M3 }, ΣK (C) = {M1 , M2 },
ΣK (D) = {M2 }, ΣK (E) = {M2 }, ΣK (F) = {M3 }.
Thus the ∼K -equivalence classes are
[A]∼K = [B]∼K = {A, B},
[D]∼K = [E]∼K = {D, E},
[C]∼K = {C},
[F]∼K = {F}.
Scenario. Let X = {A, C, D} be the set of employees currently assigned to a security audit
task.
Approximations. By Definition 2.50.1,

X̲K = {v ∈ V | [v]∼K ⊆ X} = {C},

because [C]∼K = {C} ⊆ X, while [A]∼K = {A, B} ⊄ X and [D]∼K = {D, E} ⊄ X. Moreover,

X̄K = {v ∈ V | [v]∼K ∩ X ≠ ∅} = {A, B, C, D, E}.

Interpretation: X̲K contains those certainly in X under the "same-project-signature" indiscernibility, while X̄K includes employees indistinguishable from at least one audited member (e.g., B from A and E from D).
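The computations of Example 2.50.4 can be replayed in a few lines of Python. This is only an illustrative sketch: the dictionary `facets` and the helper names are our own encoding, not notation from the text.

```python
from itertools import chain

# Facets (maximal simplices) of K from Example 2.50.4.
facets = {"M1": {"A", "B", "C"}, "M2": {"C", "D", "E"}, "M3": {"A", "B", "F"}}
V = set(chain.from_iterable(facets.values()))

def signature(v):
    # Facet-incidence signature Sigma_K(v): the set of facets containing v.
    return frozenset(name for name, m in facets.items() if v in m)

def eq_class(v):
    # ~K-equivalence class of v: vertices sharing v's signature.
    return {w for w in V if signature(w) == signature(v)}

def lower(X):
    return {v for v in V if eq_class(v) <= X}

def upper(X):
    return {v for v in V if eq_class(v) & X}

X = {"A", "C", "D"}          # employees on the security audit
print(sorted(lower(X)))      # ['C']
print(sorted(upper(X)))      # ['A', 'B', 'C', 'D', 'E']
```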
2.51 Persistent Rough Set
Persistent Rough Set tracks lower and upper approximations across a monotone scale parameter,
revealing how certainty and possibility regions evolve with changing neighborhoods or thresholds.
Definition 2.51.1 (Persistent Rough Set). Let U be a nonempty finite universe, and let {Rε}ε∈R≥0 be a family of relations on U that is monotone in scale:

0 ≤ ε1 ≤ ε2 =⇒ Rε1 ⊆ Rε2.

For each ε ≥ 0 and each x ∈ U, define the ε-neighborhood

Nε(x) := {y ∈ U | x Rε y}.

Given a target set X ⊆ U, define the ε-lower and ε-upper approximations by

X̲ε := {x ∈ U | Nε(x) ⊆ X},   X̄ε := {x ∈ U | Nε(x) ∩ X ≠ ∅}.

The scale-indexed family

PR(X) := ( (X̲ε, X̄ε) )ε∈R≥0

is called the Persistent Rough Set of X (with respect to {Rε}).


# Page. 99

![Page Image](https://bcdn.docswell.com/page/GJ5M2M68J4.jpg)

Remark 2.51.2. When Rε is induced by a metric (e.g., xRε y ⇐⇒ d(x, y) ≤ ε), PR(X)
captures how certainty/possibility regions vary with resolution (noise tolerance).
Proposition 2.51.3 (Well-definedness of the Persistent Rough Set). In Definition 2.51.1, for every ε ≥ 0 and every X ⊆ U, the neighborhood Nε(x) and the sets X̲ε, X̄ε are well-defined. If, in addition, each Rε is reflexive, then for each ε ≥ 0,

X̲ε ⊆ X ⊆ X̄ε,

and the scale family PR(X) is well-defined as an indexed family of pairs of subsets of U.

Proof. Fix ε ≥ 0. Since Rε ⊆ U × U is a relation, for each x ∈ U the set Nε(x) = {y ∈ U | x Rε y} is well-defined. Hence

X̲ε = {x ∈ U | Nε(x) ⊆ X},   X̄ε = {x ∈ U | Nε(x) ∩ X ≠ ∅}

are well-defined subsets of U.

Assume Rε is reflexive. If x ∈ X̲ε, then Nε(x) ⊆ X and x ∈ Nε(x), so x ∈ X. Thus X̲ε ⊆ X. If x ∈ X, then x ∈ Nε(x) ∩ X (again by reflexivity), so Nε(x) ∩ X ≠ ∅ and hence x ∈ X̄ε. Thus X ⊆ X̄ε.

Finally, PR(X) = {(X̲ε, X̄ε)}ε≥0 is well-defined as a family indexed by ε ∈ R≥0, because each component pair is uniquely determined by (U, Rε, X).
Example 2.51.4 (Customer segmentation under varying similarity thresholds). Let U = {c1, c2, c3, c4, c5} be customers described by feature vectors (purchase frequency, recency, spend, etc.). Assume a dissimilarity d : U × U → R≥0 has been computed, and define for each ε ≥ 0 the relation

x Rε y :⇐⇒ d(x, y) ≤ ε

(which is reflexive since d(x, x) = 0). Suppose the nonzero distances are

d(c1, c2) = 0.3,  d(c1, c3) = 0.9,  d(c2, c3) = 0.8,  d(c3, c4) = 0.4,  d(c4, c5) = 0.6,

and all other distinct pairs have distance > 0.9.

Scenario. Let X = {c1, c2} be customers flagged as high churn risk by a business rule.

At scale ε = 0.5. Neighborhoods are

N0.5(c1) = {c1, c2},  N0.5(c2) = {c1, c2},  N0.5(c3) = {c3, c4},  N0.5(c4) = {c3, c4},  N0.5(c5) = {c5}.

Hence

X̲0.5 = {c1, c2},   X̄0.5 = {c1, c2}.

At scale ε = 0.9. Neighborhoods expand to

N0.9(c1) = {c1, c2, c3},  N0.9(c2) = {c1, c2, c3},  N0.9(c3) = {c1, c2, c3, c4},  N0.9(c4) = {c3, c4, c5},  N0.9(c5) = {c4, c5}.

Thus

X̲0.9 = ∅,   X̄0.9 = {c1, c2, c3}.

Interpretation: at fine resolution (ε = 0.5), churn-risk customers are robustly separated; at coarser resolution (ε = 0.9), similarity neighborhoods blur and the "possible" region grows to include c3.
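The two-scale computation above is easy to automate. The sketch below uses our own encoding of the distances of Example 2.51.4, with unlisted distinct pairs set to 1.0 (any value > 0.9 would do):

```python
# Customers and dissimilarities of Example 2.51.4 (symmetric relation).
U = ["c1", "c2", "c3", "c4", "c5"]
d = {("c1", "c2"): 0.3, ("c1", "c3"): 0.9, ("c2", "c3"): 0.8,
     ("c3", "c4"): 0.4, ("c4", "c5"): 0.6}

def dist(x, y):
    if x == y:
        return 0.0  # d(x, x) = 0 makes every R_eps reflexive
    return d.get((x, y), d.get((y, x), 1.0))

def neighborhood(x, eps):
    # N_eps(x) under x R_eps y :<=> d(x, y) <= eps
    return {y for y in U if dist(x, y) <= eps}

def approximations(X, eps):
    lo = {x for x in U if neighborhood(x, eps) <= X}
    up = {x for x in U if neighborhood(x, eps) & X}
    return lo, up

X = {"c1", "c2"}            # flagged churn-risk customers
for eps in (0.5, 0.9):      # scan the persistence parameter
    lo, up = approximations(X, eps)
    print(eps, sorted(lo), sorted(up))
```

Scanning more ε values yields the full persistence profile PR(X).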


# Page. 100

![Page Image](https://bcdn.docswell.com/page/9E2949MV7R.jpg)

2.52 Causal Rough Set
Causal Rough Set forms indiscernibility from causally relevant attributes (e.g., a Markov boundary) and then computes Pawlak-style lower/upper approximations, emphasizing cause-driven granules over merely correlational ones in learning applications.
Definition 2.52.1 (Causal Rough Set). Let S = (U, A ∪ {d}) be a decision system, where U is the set of objects, A is a set of conditional attributes, and d is a distinguished decision attribute. Assume a causal model has been specified (e.g., a causal DAG) such that a causal boundary (or causal feature set) C(d) ⊆ A for the decision attribute d is given (for example, a Markov boundary of d). Define the causal indiscernibility relation ∼ca on U by

x ∼ca y :⇐⇒ ∀a ∈ C(d), a(x) = a(y).

For any concept X ⊆ U, define the causal lower and causal upper approximations by

ca̲(X) := {x ∈ U | [x]∼ca ⊆ X},   cā(X) := {x ∈ U | [x]∼ca ∩ X ≠ ∅}.

The pair (ca̲(X), cā(X)) is called the Causal Rough Set of X induced by C(d).
Remark 2.52.2. The novelty is that the granules are formed only from (assumed) causally
relevant attributes for d, rather than from all available attributes.
Proposition 2.52.3 (Well-definedness of the Causal Rough Set). In Definition 2.52.1, the relation ∼ca on U is a well-defined equivalence relation. Consequently, for each X ⊆ U, the sets ca̲(X) and cā(X) are well-defined subsets of U and satisfy

ca̲(X) ⊆ X ⊆ cā(X).

Proof. (Well-definedness). Since each attribute a ∈ A is a function on U, the statement a(x) = a(y) is unambiguous for any x, y ∈ U. Therefore the condition ∀a ∈ C(d), a(x) = a(y) defines a well-defined relation ∼ca.

(Equivalence). Reflexivity: a(x) = a(x) for all a, hence x ∼ca x. Symmetry: if a(x) = a(y) then a(y) = a(x), hence x ∼ca y ⇒ y ∼ca x. Transitivity: if a(x) = a(y) and a(y) = a(z) for all a ∈ C(d), then a(x) = a(z) for all such a, so x ∼ca z.

(Approximations and inclusions). Since ∼ca is an equivalence relation, each class [x]∼ca is well-defined, and thus ca̲(X) and cā(X) are well-defined. If x ∈ ca̲(X) then [x]∼ca ⊆ X and x ∈ [x]∼ca, hence x ∈ X. If x ∈ X then [x]∼ca ∩ X ≠ ∅, hence x ∈ cā(X).
Example 2.52.4 (Clinical triage using a causally relevant feature set). Let U = {p1 , p2 , p3 , p4 }
be patients. Consider conditional attributes
A = {Smoke, Chol, Gene, Exercise}


# Page. 101

![Page Image](https://bcdn.docswell.com/page/D7Y4M4XQEM.jpg)

and a decision attribute d = HighRisk. Assume a causal analysis (e.g., domain knowledge /
causal discovery) yields the causal boundary
C(d) = {Smoke, Chol, Gene},
meaning Exercise is not used to form causal granules for d.
Suppose the patient table is:

        Smoke  Chol  Gene  Exercise  HighRisk
  p1      1     H     1       L         1
  p2      1     H     1       H         1
  p3      0     H     1       L         1
  p4      0     L     0       H         0

By Definition 2.52.1,

pi ∼ca pj ⇐⇒ (Smoke(pi), Chol(pi), Gene(pi)) = (Smoke(pj), Chol(pj), Gene(pj)).
Hence

[p1]∼ca = [p2]∼ca = {p1, p2},   [p3]∼ca = {p3},   [p4]∼ca = {p4}.

Scenario. Let X = {p1, p3} be the set of patients whom a clinician tentatively selects for intensive intervention.

Approximations. Then

ca̲(X) = {p ∈ U | [p]∼ca ⊆ X} = {p3},

because [p3]∼ca = {p3} ⊆ X, while [p1]∼ca = {p1, p2} ⊄ X. Moreover,

cā(X) = {p ∈ U | [p]∼ca ∩ X ≠ ∅} = {p1, p2, p3}.
Interpretation: p2 becomes possibly in X because it is causally indistinguishable from p1 with
respect to {Smoke, Chol, Gene} (even though Exercise differs).
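A short sketch of the causal-granule computation in Example 2.52.4; the dictionary encoding of the patient table is ours:

```python
# Patient table of Example 2.52.4 (conditional attributes only).
patients = {
    "p1": {"Smoke": 1, "Chol": "H", "Gene": 1, "Exercise": "L"},
    "p2": {"Smoke": 1, "Chol": "H", "Gene": 1, "Exercise": "H"},
    "p3": {"Smoke": 0, "Chol": "H", "Gene": 1, "Exercise": "L"},
    "p4": {"Smoke": 0, "Chol": "L", "Gene": 0, "Exercise": "H"},
}
C_d = ("Smoke", "Chol", "Gene")   # causal boundary of d = HighRisk

def causal_class(p):
    # Equivalence class of p under agreement on all attributes in C(d).
    key = tuple(patients[p][a] for a in C_d)
    return {q for q in patients if tuple(patients[q][a] for a in C_d) == key}

X = {"p1", "p3"}                              # tentative intervention set
lower = {p for p in patients if causal_class(p) <= X}
upper = {p for p in patients if causal_class(p) & X}
print(sorted(lower))   # ['p3']
print(sorted(upper))   # ['p1', 'p2', 'p3']
```

Note that `Exercise` plays no role: p1 and p2 land in the same granule despite differing there.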
2.53 Entropy-Regularized Rough Set
Entropy-Regularized Rough Set refines the lower approximation by retaining elements from sufficiently pure, low-entropy equivalence blocks, reducing boundary noise while preserving the classical upper-approximation structure.
Definition 2.53.1 (Entropy-Regularized Rough Set). Let U be a finite universe and let R be an equivalence relation on U, inducing a partition U/R = {B1, . . . , Bm} into equivalence classes (blocks). For any X ⊆ U and any block B ∈ U/R, define the block proportion

pB(X) := |B ∩ X| / |B| ∈ [0, 1]


# Page. 102

![Page Image](https://bcdn.docswell.com/page/VENYWY12J8.jpg)

and the binary Shannon entropy of this proportion

HB(X) := −pB(X) log pB(X) − (1 − pB(X)) log(1 − pB(X))   (0 log 0 := 0).

Fix parameters α ∈ (1/2, 1] (purity threshold) and θ ∈ [0, log 2] (entropy threshold). Define the entropy-regularized lower approximation of X by

X̲ent^(α,θ) := { x ∈ X | p[x]R(X) ≥ α and H[x]R(X) ≤ θ },

and define the upper approximation as the classical Pawlak upper approximation

X̄ent := X̄ := {x ∈ U | [x]R ∩ X ≠ ∅}.

Then the pair (X̲ent^(α,θ), X̄ent) is called an Entropy-Regularized Rough Set of X (with respect to R, α, θ).
Remark 2.53.2. By construction, X̲ent^(α,θ) ⊆ X ⊆ X̄ent. The lower region keeps only those x ∈ X whose R-block is both sufficiently pure (large p) and of low uncertainty (small entropy).

Proposition 2.53.3 (Well-definedness of the Entropy-Regularized Rough Set). In Definition 2.53.1, the quantities pB(X) and HB(X) are well-defined for every block B ∈ U/R and every X ⊆ U. Moreover, for any parameters α ∈ (1/2, 1] and θ ∈ [0, log 2], the sets X̲ent^(α,θ) and X̄ent are well-defined subsets of U and satisfy

X̲ent^(α,θ) ⊆ X ⊆ X̄ent.

Proof. Since R is an equivalence relation on the finite set U, each block B ∈ U/R is nonempty and finite. Hence |B| > 0 and the ratio pB(X) = |B ∩ X|/|B| ∈ [0, 1] is well-defined.

For the entropy term, the convention 0 log 0 := 0 makes each summand well-defined at the endpoints pB(X) ∈ {0, 1}. For pB(X) ∈ (0, 1) the expression is standard and finite. Hence HB(X) is well-defined for all B and X.

By definition,

X̲ent^(α,θ) = { x ∈ X | p[x]R(X) ≥ α and H[x]R(X) ≤ θ },

so X̲ent^(α,θ) ⊆ X holds immediately. Also, for any x ∈ X we have [x]R ∩ X ≠ ∅ (since x ∈ [x]R ∩ X), hence x ∈ X̄ = X̄ent. Therefore X ⊆ X̄ent. All sets are defined by unambiguous comprehension over U, so they are well-defined subsets of U.
Example 2.53.4 (Manufacturing quality control with entropy-filtered certainty). Let U = {1, 2, . . . , 10} be items produced in a factory shift. Define an equivalence relation R by

i R j :⇐⇒ items i and j were produced on the same machine under the same supplier lot.

Assume this yields two blocks:

B1 = {1, 2, 3, 4, 5},   B2 = {6, 7, 8, 9, 10}.

# Page. 103

![Page Image](https://bcdn.docswell.com/page/Y79PXP5DE3.jpg)

Let the concept X ⊆ U be the set of defective items found by inspection:

X = {1, 2, 3, 4, 6}.

Thus

pB1(X) = |B1 ∩ X| / |B1| = 4/5 = 0.8,   pB2(X) = |B2 ∩ X| / |B2| = 1/5 = 0.2.

The binary entropies (natural logarithm) are

HB1(X) = −0.8 ln(0.8) − 0.2 ln(0.2) ≈ 0.5004,   HB2(X) = −0.2 ln(0.2) − 0.8 ln(0.8) ≈ 0.5004.

Scenario. Set thresholds α = 0.75 and θ = 0.55 (high purity, low uncertainty). By Definition 2.53.1,

X̲ent^(α,θ) = { x ∈ X | p[x]R(X) ≥ α, H[x]R(X) ≤ θ }.

Since pB1(X) = 0.8 ≥ 0.75 and HB1(X) ≈ 0.5004 ≤ 0.55, but pB2(X) = 0.2 < 0.75, we obtain

X̲ent^(0.75,0.55) = X ∩ B1 = {1, 2, 3, 4}.

The (classical) upper approximation is

X̄ent = X̄ = {x ∈ U | [x]R ∩ X ≠ ∅} = B1 ∪ B2 = U,

because each block contains at least one defective item.

Interpretation: the entropy-regularized lower region isolates robust defect evidence coming from a highly defective block (machine/lot B1), while the upper region flags all possibly affected items (both blocks).
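The block-wise filtering of Example 2.53.4 can be sketched as follows; the partition U/R is hard-coded in `blocks`, and entropy is taken in nats to match the example:

```python
import math

blocks = [{1, 2, 3, 4, 5}, {6, 7, 8, 9, 10}]   # U/R from Example 2.53.4
X = {1, 2, 3, 4, 6}                             # defective items

def entropy(p):
    # Binary Shannon entropy in nats, with the convention 0 log 0 := 0.
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log(p) - (1 - p) * math.log(1 - p)

def entropy_lower(X, alpha, theta):
    # Keep x in X only if its block is pure enough and low-entropy enough.
    result = set()
    for B in blocks:
        p = len(B & X) / len(B)
        if p >= alpha and entropy(p) <= theta:
            result |= B & X
    return result

def pawlak_upper(X):
    # Classical upper approximation: union of blocks meeting X.
    return set().union(*(B for B in blocks if B & X))

print(sorted(entropy_lower(X, 0.75, 0.55)))  # [1, 2, 3, 4]
print(sorted(pawlak_upper(X)))               # all ten items
```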
2.54 Differentially-Private Rough Set
Differentially-Private Rough Set estimates lower and upper approximations from privatized data: noise is added for privacy, and elements are then selected whose membership holds with high confidence under the random mechanism.
Definition 2.54.1 (Differentially-Private Rough Set). Let U be a finite universe and let R be an equivalence relation on U (or any neighborhood relation), so that rough approximations of a concept X ⊆ U can be computed from data-dependent statistics (e.g., block counts |[x]R ∩ X|). Let M be a randomized mechanism that produces a privatized view of the data and satisfies (εDP, δDP)-differential privacy with respect to a chosen adjacency model. Using the privatized output of M, suppose we compute randomized approximations

X̲M ⊆ U,   X̄M ⊆ U,

which are random sets (measurable with respect to the randomness of M). Fix a robustness level η ∈ (0, 1) and define the DP-robust lower and DP-robust upper sets by

X̲DP^(η) := { x ∈ U | Pr[x ∈ X̲M] ≥ 1 − η },   X̄DP^(η) := { x ∈ U | Pr[x ∈ X̄M] ≥ 1 − η }.

The pair (X̲DP^(η), X̄DP^(η)) is called the Differentially-Private Rough Set (DP-Rough Set) of X induced by the mechanism M.


# Page. 104

![Page Image](https://bcdn.docswell.com/page/G78D2DY57D.jpg)

Remark 2.54.2. Unlike classical rough sets, DP-Rough sets are robust estimates derived from
privatized data. The parameter η controls the confidence with which membership is asserted
under the mechanism noise.
Proposition 2.54.3 (Well-definedness of the Differentially-Private Rough Set). In Definition 2.54.1, assume that the randomized mechanism M is defined on a probability space (Ω, F, P), and that the randomized approximations X̲M(ω) and X̄M(ω) are set-valued random variables such that, for each x ∈ U, the events {ω | x ∈ X̲M(ω)} and {ω | x ∈ X̄M(ω)} are measurable. Then, for every η ∈ (0, 1), the sets X̲DP^(η) and X̄DP^(η) are well-defined subsets of U. If moreover X̲M(ω) ⊆ X̄M(ω) for all ω ∈ Ω, then

X̲DP^(η) ⊆ X̄DP^(η).

Proof. For each fixed x ∈ U, measurability of the event {x ∈ X̲M} guarantees that the probability P(x ∈ X̲M) is well-defined; similarly for X̄M. Hence the membership conditions

P(x ∈ X̲M) ≥ 1 − η,   P(x ∈ X̄M) ≥ 1 − η

are unambiguous, and therefore

X̲DP^(η) := {x ∈ U | P(x ∈ X̲M) ≥ 1 − η},   X̄DP^(η) := {x ∈ U | P(x ∈ X̄M) ≥ 1 − η}

are well-defined subsets of U.

Assume in addition that X̲M(ω) ⊆ X̄M(ω) for all ω. Then for each x ∈ U we have the pointwise implication x ∈ X̲M(ω) ⇒ x ∈ X̄M(ω), hence

P(x ∈ X̄M) ≥ P(x ∈ X̲M).

Therefore, if x ∈ X̲DP^(η), then P(x ∈ X̲M) ≥ 1 − η implies P(x ∈ X̄M) ≥ 1 − η, so x ∈ X̄DP^(η). Thus X̲DP^(η) ⊆ X̄DP^(η).
Example 2.54.4 (Publishing hotspot regions under differential privacy). Let U = {z1, z2, z3} be city zip codes. Let X = {z1, z2} be the (non-public) set of true high-incidence areas for a disease.

A public-health agency applies a randomized mechanism M (e.g., Laplace noise on case counts followed by thresholding) and releases a privatized classification that induces random lower/upper sets

X̲M(ω) ⊆ U,   X̄M(ω) ⊆ U.

Assume the induced membership probabilities (derivable from the noise model) are:

P(z1 ∈ X̲M) = 0.95,  P(z2 ∈ X̲M) = 0.92,  P(z3 ∈ X̲M) = 0.08,
P(z1 ∈ X̄M) = 0.98,  P(z2 ∈ X̄M) = 0.97,  P(z3 ∈ X̄M) = 0.40.


# Page. 105

![Page Image](https://bcdn.docswell.com/page/L7LM2M43JR.jpg)

Scenario. Choose robustness η = 0.10 (90% confidence). By Definition 2.54.1,

X̲DP^(0.10) = {z ∈ U | P(z ∈ X̲M) ≥ 0.90} = {z1, z2},

while

X̄DP^(0.10) = {z ∈ U | P(z ∈ X̄M) ≥ 0.90} = {z1, z2}.

If the agency instead uses a looser robustness level η = 0.60 (40% confidence), then

X̄DP^(0.60) = {z1, z2, z3},

reflecting that z3 is possibly a hotspot under the privatized release, although not robustly so.

Interpretation: the DP-rough approximations describe what can be asserted about hotspots with high confidence given privacy-induced randomness.
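Once the membership probabilities are known, the η-robust sets reduce to a threshold test. The sketch below hard-codes the probabilities of Example 2.54.4 rather than deriving them from a Laplace noise model:

```python
# Membership probabilities from Example 2.54.4 (assumed given by the noise model).
p_lower = {"z1": 0.95, "z2": 0.92, "z3": 0.08}   # Pr[z in lower set]
p_upper = {"z1": 0.98, "z2": 0.97, "z3": 0.40}   # Pr[z in upper set]

def dp_robust(probs, eta):
    # Elements whose membership probability is at least 1 - eta.
    return {z for z, p in probs.items() if p >= 1 - eta}

print(sorted(dp_robust(p_lower, 0.10)))  # ['z1', 'z2']
print(sorted(dp_robust(p_upper, 0.10)))  # ['z1', 'z2']
print(sorted(dp_robust(p_upper, 0.60)))  # ['z1', 'z2', 'z3']
```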


# Page. 106

![Page Image](https://bcdn.docswell.com/page/4EMY8YKMEW.jpg)

Chapter 3
Uncertain Rough Set
In this chapter, we introduce and discuss several variants of uncertain rough sets.
3.1 Fuzzy Rough Set
Fuzzy rough sets approximate a concept using fuzzy similarity relations, yielding fuzzy lower and
upper memberships for uncertain classification tasks [229, 231–233]. Related concepts include
Picture fuzzy rough sets [234, 235], Hesitant Fuzzy rough sets [236–238], Bipolar fuzzy rough
sets [239–242], Linear diophantine fuzzy rough sets [243–245], Multipolar fuzzy rough sets [246],
Variable precision fuzzy rough sets [247, 248], Soft fuzzy rough sets [232, 249, 250], Robust fuzzy
rough sets [251, 252], and Spherical fuzzy rough sets [253–255].
Definition 3.1.1 (Fuzzy rough approximations and fuzzy rough set). [229, 231] Let U be a nonempty universe and let R : U × U → [0, 1] be a fuzzy relation. Let T be a t-norm, S a t-conorm, and let N : [0, 1] → [0, 1] be the standard negator N(a) = 1 − a. Define the (S-)implicator

IS(a, b) := S(N(a), b)   (a, b ∈ [0, 1]).

For a fuzzy set A on U (identified with its membership function μA : U → [0, 1]), the fuzzy rough lower and fuzzy rough upper approximations of A w.r.t. R are the fuzzy sets R̲(A) and R̄(A) whose membership functions are

μR̲(A)(x) := inf_{y∈U} IS(R(x, y), μA(y)) = inf_{y∈U} S(1 − R(x, y), μA(y)),
μR̄(A)(x) := sup_{y∈U} T(R(x, y), μA(y))   (x ∈ U).

The pair (R̲(A), R̄(A)) is called the fuzzy rough set induced by A (w.r.t. R).
Example 3.1.2 (Fuzzy rough set for identifying “loyal customers” from similarity of purchase
behavior). Let U = {c1 , c2 , c3 } be three customers in an online store. We consider the fuzzy
concept
A = “loyal customer”


# Page. 107

![Page Image](https://bcdn.docswell.com/page/PER959XLJ9.jpg)

modeled by the membership function

μA(c1) = 0.9,   μA(c2) = 0.5,   μA(c3) = 0.2.

Assume a fuzzy similarity relation R : U × U → [0, 1] derived from similarity of purchase patterns:

R(x, y)   c1    c2    c3
  c1      1.0   0.7   0.3
  c2      0.7   1.0   0.6
  c3      0.3   0.6   1.0

Choose the standard negator N(a) = 1 − a, the t-norm T = min, and the t-conorm S = max. Then the S-implicator is

IS(a, b) = S(1 − a, b) = max(1 − a, b).

Since U is finite, inf = min and sup = max; hence for each x ∈ U,

μR̲(A)(x) = min_{y∈U} max(1 − R(x, y), μA(y)),   μR̄(A)(x) = max_{y∈U} min(R(x, y), μA(y)).

We compute the approximations explicitly.

(i) Upper approximation.

μR̄(A)(c1) = max{min(1, 0.9), min(0.7, 0.5), min(0.3, 0.2)} = max{0.9, 0.5, 0.2} = 0.9,
μR̄(A)(c2) = max{min(0.7, 0.9), min(1, 0.5), min(0.6, 0.2)} = max{0.7, 0.5, 0.2} = 0.7,
μR̄(A)(c3) = max{min(0.3, 0.9), min(0.6, 0.5), min(1, 0.2)} = max{0.3, 0.5, 0.2} = 0.5.

(ii) Lower approximation.

μR̲(A)(c1) = min{max(0, 0.9), max(0.3, 0.5), max(0.7, 0.2)} = min{0.9, 0.5, 0.7} = 0.5,
μR̲(A)(c2) = min{max(0.3, 0.9), max(0, 0.5), max(0.4, 0.2)} = min{0.9, 0.5, 0.4} = 0.4,
μR̲(A)(c3) = min{max(0.7, 0.9), max(0.4, 0.5), max(0, 0.2)} = min{0.9, 0.5, 0.2} = 0.2.

Therefore,

R̲(A) = {(c1, 0.5), (c2, 0.4), (c3, 0.2)},   R̄(A) = {(c1, 0.9), (c2, 0.7), (c3, 0.5)},

and the fuzzy rough set induced by A (w.r.t. R) is the pair (R̲(A), R̄(A)).

Interpretation. R̲(A) gives a conservative loyalty score that must be supported across all similar customers, while R̄(A) gives a permissive score supported by at least one similar customer.
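The min/max computations of Example 3.1.2 can be checked mechanically. In this sketch the relation R is stored as a dictionary (our own encoding), with T = min and S = max as in the example:

```python
U = ["c1", "c2", "c3"]
mu_A = {"c1": 0.9, "c2": 0.5, "c3": 0.2}   # "loyal customer" memberships
R = {("c1", "c1"): 1.0, ("c1", "c2"): 0.7, ("c1", "c3"): 0.3,
     ("c2", "c1"): 0.7, ("c2", "c2"): 1.0, ("c2", "c3"): 0.6,
     ("c3", "c1"): 0.3, ("c3", "c2"): 0.6, ("c3", "c3"): 1.0}

def lower(x):
    # inf_y S(1 - R(x, y), mu_A(y)) with S = max
    return min(max(1 - R[(x, y)], mu_A[y]) for y in U)

def upper(x):
    # sup_y T(R(x, y), mu_A(y)) with T = min
    return max(min(R[(x, y)], mu_A[y]) for y in U)

print({x: round(lower(x), 2) for x in U})  # {'c1': 0.5, 'c2': 0.4, 'c3': 0.2}
print({x: round(upper(x), 2) for x in U})  # {'c1': 0.9, 'c2': 0.7, 'c3': 0.5}
```

Other t-norm/t-conorm pairs (e.g., product/probabilistic sum) drop in by replacing the `min`/`max` inside the two generators.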


# Page. 108

![Page Image](https://bcdn.docswell.com/page/P7XQKQM6EX.jpg)

3.2 Intuitionistic Fuzzy Rough Set
An intuitionistic fuzzy set assigns to each element both a membership degree and a nonmembership degree in [0, 1], with their sum at most 1, thereby explicitly representing hesitation under
incomplete information in decision-making [3, 256]. An intuitionistic fuzzy rough set then combines this intuitionistic fuzzy description with a suitable intuitionistic fuzzy relation to compute
lower and upper intuitionistic fuzzy approximations of a concept, producing bounded uncertainty regions for classification and analysis [257–259]. Related concepts also include generalized
intuitionistic fuzzy rough sets [257] and intuitionistic hesitant fuzzy rough sets [260].
Definition 3.2.1 (Intuitionistic fuzzy rough approximations and IF rough set). Let U be a
nonempty universe. An intuitionistic fuzzy set (IF set) on U is a pair X = (µX , γX ) of functions
µX , γX : U → [0, 1] satisfying µX (x) + γX (x) ≤ 1 for all x ∈ U ; µX (x) and γX (x) are the
membership and nonmembership degrees, respectively.
An intuitionistic fuzzy (binary) relation on U is a pair R = (µR , γR ) with µR , γR : U ×U → [0, 1]
and µR (x, y) + γR (x, y) ≤ 1 for all (x, y) ∈ U × U .
For any IF set X = (μX, γX), define the lower and upper approximations of X w.r.t. R as IF sets

R̲(X) = (μR̲(X), γR̲(X)),   R̄(X) = (μR̄(X), γR̄(X)),

where for each x ∈ U,

μR̲(X)(x) := inf_{y∈U} max(γR(x, y), μX(y)),   γR̲(X)(x) := sup_{y∈U} min(μR(x, y), γX(y)),
μR̄(X)(x) := sup_{y∈U} min(μR(x, y), μX(y)),   γR̄(X)(x) := inf_{y∈U} max(γR(x, y), γX(y)).

If R̲(X) = R̄(X), then X is IF definable; otherwise (R̲(X), R̄(X)) is called an intuitionistic fuzzy rough set (IF rough set).
Example 3.2.2 (Intuitionistic fuzzy rough set for loan-default risk (similarity of applicants)). Let U = {a, b, c} be three loan applicants. We model the concept

X = "high default risk"

as an intuitionistic fuzzy (IF) set X = (μX, γX) on U:

x        a     b     c
μX(x)   0.80  0.40  0.20
γX(x)   0.10  0.40  0.70

(μX(x) + γX(x) ≤ 1 for all x). Interpretation: μX(x) is evidence that x is high-risk, while γX(x) is evidence that x is not high-risk.

Next, define an intuitionistic fuzzy relation R = (μR, γR) on U encoding "financial-profile similarity" (e.g., similar income–debt patterns). Let μR and γR be symmetric, with μR(x, x) = 1 and γR(x, x) = 0, and the following off-diagonal values:

μR    a     b     c        γR    a     b     c
a    1.00  0.70  0.30      a    0.00  0.20  0.60
b    0.70  1.00  0.50      b    0.20  0.00  0.30
c    0.30  0.50  1.00      c    0.60  0.30  0.00


# Page. 109

![Page Image](https://bcdn.docswell.com/page/37K959GG7D.jpg)

(note μR(x, y) + γR(x, y) ≤ 1 everywhere).

Using the definition of IF rough approximations, for each x ∈ U (since U is finite, inf = min and sup = max),

μR̲(X)(x) = min_{y∈U} max(γR(x, y), μX(y)),   γR̲(X)(x) = max_{y∈U} min(μR(x, y), γX(y)),
μR̄(X)(x) = max_{y∈U} min(μR(x, y), μX(y)),   γR̄(X)(x) = min_{y∈U} max(γR(x, y), γX(y)).

For instance, at x = a,

μR̲(X)(a) = min{max(0, 0.80), max(0.20, 0.40), max(0.60, 0.20)} = min{0.80, 0.40, 0.60} = 0.40,
γR̲(X)(a) = max{min(1, 0.10), min(0.70, 0.40), min(0.30, 0.70)} = max{0.10, 0.40, 0.30} = 0.40,
μR̄(X)(a) = max{min(1, 0.80), min(0.70, 0.40), min(0.30, 0.20)} = max{0.80, 0.40, 0.20} = 0.80,
γR̄(X)(a) = min{max(0, 0.10), max(0.20, 0.40), max(0.60, 0.70)} = min{0.10, 0.40, 0.70} = 0.10.

Carrying out the same finite min/max computations for b and c yields:

          R̲(X)                  R̄(X)
x    μR̲(X)(x)  γR̲(X)(x)    μR̄(X)(x)  γR̄(X)(x)
a      0.40      0.40          0.80      0.10
b      0.30      0.50          0.70      0.20
c      0.20      0.70          0.40      0.40

Hence R̲(X) ≠ R̄(X), so (R̲(X), R̄(X)) is an intuitionistic fuzzy rough set. Intuitively, similarity between applicants propagates both risk-evidence and non-risk-evidence, producing conservative (lower) and permissive (upper) IF assessments.
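The four min/max formulas above are symmetric enough that a small sketch covers them all; the dictionary encodings of (μX, γX) and (μR, γR) below are ours:

```python
U = ["a", "b", "c"]
mu_X = {"a": 0.80, "b": 0.40, "c": 0.20}   # membership in "high default risk"
ga_X = {"a": 0.10, "b": 0.40, "c": 0.70}   # nonmembership
mu_R = {("a", "a"): 1.0, ("a", "b"): 0.70, ("a", "c"): 0.30,
        ("b", "a"): 0.70, ("b", "b"): 1.0, ("b", "c"): 0.50,
        ("c", "a"): 0.30, ("c", "b"): 0.50, ("c", "c"): 1.0}
ga_R = {("a", "a"): 0.0, ("a", "b"): 0.20, ("a", "c"): 0.60,
        ("b", "a"): 0.20, ("b", "b"): 0.0, ("b", "c"): 0.30,
        ("c", "a"): 0.60, ("c", "b"): 0.30, ("c", "c"): 0.0}

def lower(x):
    # (mu, gamma) of the IF lower approximation at x.
    mu = min(max(ga_R[(x, y)], mu_X[y]) for y in U)
    ga = max(min(mu_R[(x, y)], ga_X[y]) for y in U)
    return mu, ga

def upper(x):
    # (mu, gamma) of the IF upper approximation at x.
    mu = max(min(mu_R[(x, y)], mu_X[y]) for y in U)
    ga = min(max(ga_R[(x, y)], ga_X[y]) for y in U)
    return mu, ga

print(lower("a"), upper("a"))  # (0.4, 0.4) (0.8, 0.1)
```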
3.3 Vague Rough Set
A vague set assigns to each element an interval-valued membership [t(x), 1 − f (x)] determined
by the degree of supporting evidence t(x) and opposing evidence f (x) [261, 262]. A vague rough
set combines Pawlak rough approximations with such interval-valued memberships: elements
in the lower approximation are treated as certainly in the concept, elements outside the upper
approximation as certainly out, and elements in the boundary region are represented by genuinely
vague membership intervals [263, 264].
Definition 3.3.1 (Vague set). [261, 262] Let U be a nonempty universe. A vague set V in U
is specified by two functions
tV , fV : U → [0, 1] with tV (x) + fV (x) ≤ 1 (x ∈ U ).
For each x ∈ U , the (interval-valued) membership of x in V is bounded by
ηV (x) ∈ [ tV (x), 1 − fV (x) ] ⊆ [0, 1].


# Page. 110

![Page Image](https://bcdn.docswell.com/page/LJ3WKWN5J5.jpg)

Definition 3.3.2 (Vague rough set). [263, 264] Let (U, R) be an approximation space and let X ⊆ U. A vague rough set (induced by X under R) is a pair of functions

μ, ν : U → [0, 1]

(called membership and non-membership) satisfying:

(certainly in)       x ∈ R̲(X) =⇒ [μ(x), 1 − ν(x)] = [1, 1],
(certainly out)      x ∈ U \ R̄(X) =⇒ [μ(x), 1 − ν(x)] = [0, 0],
(boundary/vague)     x ∈ BNDR(X) =⇒ 0 ≤ μ(x) + ν(x) ≤ 1.

Equivalently, each x ∈ U is assigned a vague membership interval

[μ(x), 1 − ν(x)] ⊆ [0, 1],

which is crisp on R̲(X) and U \ R̄(X) and may be genuinely vague on the boundary BNDR(X).
Remark 3.3.3 (Alternative (lower/upper vague sets)). One may also regard a vague rough set as
a pair of vague sets on the rough approximations: given a rough set (XL , XU ) = (R(X), R(X)),
define vague sets VL on XL and VU on XU via truth/false membership functions
tL , fL : XL → [0, 1],
tU , fU : XU → [0, 1],
with the pointwise constraints tL ≤ 1 − fL , tU ≤ 1 − fU , and a natural coherence such as
1 − fL (y) ≤ 1 − fU (y) for y ∈ XU . This viewpoint emphasizes that “certain” and “possible”
regions each carry their own vague evidence.
Example 3.3.4 (Age-group concept with vagueness on the boundary). Let
U = {Child, Pre-Teen, Teen, Youth, Teenager, Young-Adult, Adult, Senior, Senior-Citizen, Elderly}.
Define an equivalence relation R (indiscernibility) whose classes are
{Child, Pre-Teen}, {Teen, Youth, Teenager}, {Young-Adult}, {Adult}, {Senior, Senior-Citizen, Elderly}.
Consider the target (crisp) concept
X = {Child, Pre-Teen, Youth, Young-Adult} ⊆ U.
Then

R̲(X) = {Child, Pre-Teen, Young-Adult},
R̄(X) = {Child, Pre-Teen, Teen, Youth, Teenager, Young-Adult},

so the boundary is BNDR(X) = {Teen, Youth, Teenager}.

Define a vague rough set by specifying (μ, ν) as follows:

u                μ(u)  ν(u)  [μ(u), 1 − ν(u)]
Child             1     0        [1, 1]
Pre-Teen          1     0        [1, 1]
Young-Adult       1     0        [1, 1]
Adult             0     1        [0, 0]
Senior            0     1        [0, 0]
Senior-Citizen    0     1        [0, 0]
Elderly           0     1        [0, 0]
Teen             0.3   0.5      [0.3, 0.5]
Youth            0.5   0.3      [0.5, 0.7]
Teenager         0.4   0.4      [0.4, 0.6]

Here the certain region R̲(X) has crisp value [1, 1], the certainly-out region U \ R̄(X) has crisp value [0, 0], and the boundary elements carry genuine vagueness with μ(u) + ν(u) ≤ 1.
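The three regions of Example 3.3.4 (certain, certainly-out, boundary) can be derived from the partition in a few lines; the list encoding of the equivalence classes is ours:

```python
# Equivalence classes and target concept from Example 3.3.4.
classes = [{"Child", "Pre-Teen"}, {"Teen", "Youth", "Teenager"},
           {"Young-Adult"}, {"Adult"}, {"Senior", "Senior-Citizen", "Elderly"}]
X = {"Child", "Pre-Teen", "Youth", "Young-Adult"}

# Pawlak lower/upper approximations and the boundary region.
lower = set().union(*(B for B in classes if B <= X))
upper = set().union(*(B for B in classes if B & X))
boundary = upper - lower

print(sorted(lower))     # certainly in: interval [1, 1]
print(sorted(boundary))  # genuinely vague intervals [mu, 1 - nu]
```

Elements outside `upper` get the crisp interval [0, 0]; only those in `boundary` need nontrivial (μ, ν) pairs.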


# Page. 111

![Page Image](https://bcdn.docswell.com/page/8JDK3KWYEG.jpg)

3.4 Neutrosophic Rough Set
A neutrosophic set assigns each element independent truth, indeterminacy, and falsity degrees
in [0, 1], modeling incomplete information and contradictory evidence [6, 7]. Neutrosophic rough
sets form lower/upper neutrosophic approximations by taking inf/sup of the truth, indeterminacy, and falsity degrees over each equivalence class [182, 265–268]. Related concepts also
include bipolar neutrosophic rough sets [269], quadripartitioned neutrosophic rough sets [270],
generalized Neutrosophic Rough Sets [266, 271, 272], and pentapartitioned neutrosophic rough
sets [273–275].
Definition 3.4.1 (Neutrosophic rough approximations and neutrosophic rough set). Let U be a nonempty universe and let R ⊆ U × U be an equivalence relation. For x ∈ U, write [x]R := {y ∈ U | (x, y) ∈ R}.

A single-valued neutrosophic set on U is a triple A = (TA, IA, FA) of functions TA, IA, FA : U → [0, 1], interpreted as truth-, indeterminacy-, and falsity-membership degrees.

Define the neutrosophic lower and neutrosophic upper approximations of A w.r.t. R by the neutrosophic sets R̲(A) = (TR̲(A), IR̲(A), FR̲(A)) and R̄(A) = (TR̄(A), IR̄(A), FR̄(A)) where, for each x ∈ U,

TR̲(A)(x) := inf_{y∈[x]R} TA(y),   IR̲(A)(x) := inf_{y∈[x]R} IA(y),   FR̲(A)(x) := sup_{y∈[x]R} FA(y),
TR̄(A)(x) := sup_{y∈[x]R} TA(y),   IR̄(A)(x) := sup_{y∈[x]R} IA(y),   FR̄(A)(x) := inf_{y∈[x]R} FA(y).

The pair (R̲(A), R̄(A)) is called the neutrosophic rough set induced by A (w.r.t. R).

Example 3.4.2 (Neutrosophic rough set in medical triage (influenza suspicion)). Let U = {p1, p2, p3, p4, p5} be five patients in an emergency department. We model the neutrosophic concept

A = "the patient has influenza"

as a single-valued neutrosophic set A = (TA, IA, FA) on U, where: TA comes from a rapid antigen score (evidence-for), IA represents indeterminacy due to sample quality / timing, and FA comes from evidence-against (e.g., strong alternative diagnosis indicators).

Suppose patients are grouped by the same coarse symptom profile (e.g., high fever & cough vs. mild fever & cough, etc.). Define an equivalence relation R on U with classes

[p1]R = [p2]R = {p1, p2},   [p3]R = [p4]R = {p3, p4},   [p5]R = {p5}.

Assume the neutrosophic membership degrees are:

x     TA(x)  IA(x)  FA(x)
p1    0.80   0.20   0.10
p2    0.60   0.40   0.30
p3    0.30   0.50   0.40
p4    0.40   0.30   0.50
p5    0.10   0.20   0.80


# Page. 112

![Page Image](https://bcdn.docswell.com/page/VEPK4K9278.jpg)

By the definition of neutrosophic rough approximations, for each x ∈ U,

TR̲(A)(x) = inf_{y∈[x]R} TA(y),   IR̲(A)(x) = inf_{y∈[x]R} IA(y),   FR̲(A)(x) = sup_{y∈[x]R} FA(y),
TR̄(A)(x) = sup_{y∈[x]R} TA(y),   IR̄(A)(x) = sup_{y∈[x]R} IA(y),   FR̄(A)(x) = inf_{y∈[x]R} FA(y).

Class {p1, p2}. For x ∈ {p1, p2},

R̲(A)(x) = (min{0.80, 0.60}, min{0.20, 0.40}, max{0.10, 0.30}) = (0.60, 0.20, 0.30),
R̄(A)(x) = (max{0.80, 0.60}, max{0.20, 0.40}, min{0.10, 0.30}) = (0.80, 0.40, 0.10).

Class {p3, p4}. For x ∈ {p3, p4},

R̲(A)(x) = (min{0.30, 0.40}, min{0.50, 0.30}, max{0.40, 0.50}) = (0.30, 0.30, 0.50),
R̄(A)(x) = (max{0.30, 0.40}, max{0.50, 0.30}, min{0.40, 0.50}) = (0.40, 0.50, 0.40).

Singleton class {p5}. For x = p5,

R̲(A)(p5) = R̄(A)(p5) = A(p5) = (0.10, 0.20, 0.80).

Therefore, the neutrosophic rough set induced by A (w.r.t. R) is (R̲(A), R̄(A)).

Interpretation: within each symptom-profile class, R̲(A) provides a conservative (worst-case) assessment of influenza (lower truth/indeterminacy and higher falsity), while R̄(A) gives a permissive (best-case) assessment consistent with at least one patient in the class.
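The class-wise inf/sup pattern of Example 3.4.2 is a direct translation into min/max over each block; the tuple encoding of (TA, IA, FA) below is ours:

```python
A = {  # (T, I, F) degrees per patient, as in Example 3.4.2
    "p1": (0.80, 0.20, 0.10), "p2": (0.60, 0.40, 0.30),
    "p3": (0.30, 0.50, 0.40), "p4": (0.40, 0.30, 0.50),
    "p5": (0.10, 0.20, 0.80),
}
# Map each patient to its equivalence class under R.
classes = {p: c for c in [{"p1", "p2"}, {"p3", "p4"}, {"p5"}] for p in c}

def lower(x):
    # (inf T, inf I, sup F) over the class of x.
    c = classes[x]
    return (min(A[y][0] for y in c), min(A[y][1] for y in c),
            max(A[y][2] for y in c))

def upper(x):
    # (sup T, sup I, inf F) over the class of x.
    c = classes[x]
    return (max(A[y][0] for y in c), max(A[y][1] for y in c),
            min(A[y][2] for y in c))

print(lower("p1"), upper("p1"))  # (0.6, 0.2, 0.3) (0.8, 0.4, 0.1)
```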
3.5 Plithogenic Rough Set
A plithogenic set models element appurtenance to multiple attribute values, weighted by contradiction degrees, and aggregates memberships using tailored operators [276–278]. Plithogenic rough sets form lower/upper approximations by taking the componentwise meet/join of the appurtenance vectors over each equivalence class, while retaining the attribute values and the contradiction function [279].
Definition 3.5.1 (Plithogenic rough set). [280] Let U be a nonempty finite universe and let
R ⊆ U × U be an equivalence relation. For each x ∈ U , write
[x]R := { y ∈ U | (x, y) ∈ R }
for the R-equivalence class of x.


# Page. 113

![Page Image](https://bcdn.docswell.com/page/27VVXVMX7Q.jpg)

Fix an attribute a with a nonempty finite value set Va . Let s, t ∈ N. A plithogenic set on U
(with respect to a) is a quintuple
PS = (U, a, Va , pdf, pcf),
where pdf : U × Va → [0, 1]^s is the degree of appurtenance function and pcf : Va × Va → [0, 1]^t is the degree of contradiction function satisfying
pcf(v, v) = 0,   pcf(v, w) = pcf(w, v)   (v, w ∈ Va).
Equip [0, 1]^s with the product (componentwise) order ≤ and the lattice operations
(u ∧ w)_j := min{u_j, w_j},   (u ∨ w)_j := max{u_j, w_j}   (1 ≤ j ≤ s),
for u = (u_1, ..., u_s) and w = (w_1, ..., w_s).
For each v ∈ Va, define the R-lower and R-upper plithogenic rough approximations of PS by the maps pdf_R̲, pdf_R̄ : U × Va → [0, 1]^s:
pdf_R̲(x, v) := ⋀_{y∈[x]_R} pdf(y, v),   pdf_R̄(x, v) := ⋁_{y∈[x]_R} pdf(y, v)   (x ∈ U, v ∈ Va).
Then the plithogenic rough set induced by (U, R) and PS is the pair
( PS_R̲, PS_R̄ ),
where
PS_R̲ := (U, a, Va, pdf_R̲, pcf),   PS_R̄ := (U, a, Va, pdf_R̄, pcf).
Example 3.5.2 (Plithogenic rough set for product color with contradictory labels). Let U =
{u1 , u2 , u3 , u4 , u5 } be a set of T-shirts in an online catalog. Assume the indiscernibility relation
R groups items by the same manufacturer and fabric type, yielding the equivalence classes
[u1 ]R = [u2 ]R = {u1 , u2 },
[u3 ]R = [u4 ]R = {u3 , u4 },
[u5 ]R = {u5 }.
We study one attribute a =“color” with value set
Va = {Red, Orange, Brown}.
Take s = 2 and interpret pdf(x, v) = (µ1(x, v), µ2(x, v)) ∈ [0, 1]² as a two-source appurtenance vector, e.g. (µ1, µ2) = (image-based classifier score, human-tagging score). Define pdf : U × Va → [0, 1]² by the following values:

| | Red | Orange | Brown |
| --- | --- | --- | --- |
| u1 | (0.90, 0.80) | (0.20, 0.10) | (0.10, 0.20) |
| u2 | (0.60, 0.70) | (0.50, 0.40) | (0.20, 0.20) |
| u3 | (0.10, 0.10) | (0.70, 0.60) | (0.40, 0.50) |
| u4 | (0.20, 0.10) | (0.60, 0.50) | (0.50, 0.60) |
| u5 | (0.30, 0.20) | (0.20, 0.30) | (0.80, 0.90) |


# Page. 114

![Page Image](https://bcdn.docswell.com/page/5JGLVLGR7L.jpg)

Define a contradiction function pcf : Va × Va → [0, 1] (so t = 1) encoding how incompatible two
color labels are:
pcf(v, v) = 0,
pcf(Red, Orange) = 0.20,
pcf(Orange, Brown) = 0.30,
pcf(Red, Brown) = 0.80,
and extend symmetrically, e.g. pcf(Orange, Red) = 0.20.
Hence the plithogenic set is
PS = (U, a, Va , pdf, pcf).
The plithogenic R-lower and R-upper rough approximations are computed componentwise via ∧ = min and ∨ = max in [0, 1]²:
pdf_R̲(x, v) = ⋀_{y∈[x]_R} pdf(y, v),   pdf_R̄(x, v) = ⋁_{y∈[x]_R} pdf(y, v).
For instance, in the class {u1, u2} we obtain:
pdf_R̲(u1, Red) = pdf_R̲(u2, Red) = (min{0.90, 0.60}, min{0.80, 0.70}) = (0.60, 0.70),
pdf_R̄(u1, Red) = pdf_R̄(u2, Red) = (max{0.90, 0.60}, max{0.80, 0.70}) = (0.90, 0.80),
and similarly
pdf_R̲(u1, Orange) = pdf_R̲(u2, Orange) = (0.20, 0.10),
pdf_R̄(u1, Orange) = pdf_R̄(u2, Orange) = (0.50, 0.40).
In the class {u3, u4}, for Brown:
pdf_R̲(u3, Brown) = pdf_R̲(u4, Brown) = (min{0.40, 0.50}, min{0.50, 0.60}) = (0.40, 0.50),
pdf_R̄(u3, Brown) = pdf_R̄(u4, Brown) = (max{0.40, 0.50}, max{0.50, 0.60}) = (0.50, 0.60).
Finally, for the singleton class {u5} we have pdf_R̲(u5, v) = pdf_R̄(u5, v) = pdf(u5, v).
Therefore the induced plithogenic rough set is the pair
( PS_R̲, PS_R̄ ),
where
PS_R̲ = (U, a, Va, pdf_R̲, pcf),   PS_R̄ = (U, a, Va, pdf_R̄, pcf).
Within each manufacturer–fabric class, pdf_R̲ gives a conservative (class-consistent) vector score for each color value, while pdf_R̄ gives a permissive score capturing any evidence in the class. The contradiction map pcf records that confusing Red with Brown is far more contradictory than confusing Red with Orange, which can be used later in plithogenic aggregation rules.
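The componentwise meet/join over each class can be sketched directly. The following Python snippet (names ours, data taken from the example) computes the lower and upper appurtenance vectors:

```python
# Sketch (names ours): componentwise meet/join of two-source appurtenance
# vectors over each manufacturer-fabric class, as in the T-shirt example.

pdf = {  # (item, color) -> appurtenance vector in [0, 1]^2
    ("u1", "Red"): (0.90, 0.80), ("u2", "Red"): (0.60, 0.70),
    ("u1", "Orange"): (0.20, 0.10), ("u2", "Orange"): (0.50, 0.40),
    ("u3", "Brown"): (0.40, 0.50), ("u4", "Brown"): (0.50, 0.60),
}
classes = {"u1": ("u1", "u2"), "u2": ("u1", "u2"),
           "u3": ("u3", "u4"), "u4": ("u3", "u4")}

def meet(vectors):  # componentwise min
    return tuple(map(min, zip(*vectors)))

def join(vectors):  # componentwise max
    return tuple(map(max, zip(*vectors)))

def pdf_lower(x, v):
    return meet([pdf[(y, v)] for y in classes[x]])

def pdf_upper(x, v):
    return join([pdf[(y, v)] for y in classes[x]])

assert pdf_lower("u1", "Red") == (0.60, 0.70)
assert pdf_upper("u1", "Red") == (0.90, 0.80)
assert pdf_lower("u3", "Brown") == (0.40, 0.50)
assert pdf_upper("u3", "Brown") == (0.50, 0.60)
```

The contradiction function pcf plays no role in the approximations themselves; it is carried along unchanged, as in the definition.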


# Page. 115

![Page Image](https://bcdn.docswell.com/page/47QY6YXYEP.jpg)

3.6 Uncertain Rough Set
An Uncertain Set is a generic way to attach “uncertainty values” to elements, where the values live
in a chosen degree-domain [281]. By selecting an appropriate degree-domain, one recovers fuzzy,
intuitionistic fuzzy, neutrosophic, plithogenic, and many related models as special cases [281].
Definition 3.6.1 (Uncertainty model / degree-domain). An uncertainty model M consists of a bounded De Morgan lattice
( Dom(M), ≤_M, ⊕_M, ⊗_M, N_M, 0_M, 1_M ),
where ⊕_M and ⊗_M are the join and meet, N_M is an order-reversing involution (De Morgan complement), and 0_M, 1_M are the least and greatest elements. For a finite family {a_i}_{i=1}^n ⊆ Dom(M) we write
⋁_{i=1}^n a_i := a_1 ⊕_M ··· ⊕_M a_n,   ⋀_{i=1}^n a_i := a_1 ⊗_M ··· ⊗_M a_n.
Definition 3.6.2 (M-implication (Kleene–Dienes type)). For a, b ∈ Dom(M) define
a ⇒_M b := N_M(a) ⊕_M b.
Definition 3.6.3 (M -valued relation). Let X be a nonempty finite universe. An M -valued
relation (or M -relation) on X is a mapping
RM : X × X −→ Dom(M ).
(Optionally one may assume M -reflexivity RM (x, x) = 1M and/or M -symmetry, etc., depending
on the application.)
Definition 3.6.4 (Uncertain Set (U-Set) of type M ). An Uncertain Set of type M on X is a
mapping
µA : X −→ Dom(M ).
Definition 3.6.5 (Uncertain rough approximations). Let X be finite, let M be as in Definition 3.6.1, let R_M be an M-relation on X, and let A be a U-Set of type M with membership µ_A. Define the M-lower and M-upper rough approximations of A (with respect to R_M) by
µ_{R̲_M(A)}(x) := ⋀_{y∈X} ( R_M(x, y) ⇒_M µ_A(y) ),
µ_{R̄_M(A)}(x) := ⋁_{y∈X} ( R_M(x, y) ⊗_M µ_A(y) )   (x ∈ X).
The induced uncertain rough set (of A w.r.t. R_M) is the pair
UR_M(A) := ( R̲_M(A), R̄_M(A) ).
If one wants region-style objects, define pointwise (for x ∈ X)
µ_{POS_M(A)}(x) := µ_{R̲_M(A)}(x),
µ_{NEG_M(A)}(x) := N_M( µ_{R̄_M(A)}(x) ),
µ_{BND_M(A)}(x) := µ_{R̄_M(A)}(x) ⊗_M N_M( µ_{R̲_M(A)}(x) ).


# Page. 116

![Page Image](https://bcdn.docswell.com/page/KE4W4W6ZJ1.jpg)

Example 3.6.6 (Uncertain rough approximations in anomaly screening (fuzzy model M )). Let
X = {x1 , x2 , x3 } be three servers in a small network. Suppose A is the uncertain concept
“compromised (anomalous) server,” obtained from a noisy detector, with membership degrees
µ_A(x1) = 0.9,   µ_A(x2) = 0.6,   µ_A(x3) = 0.2.
We instantiate the uncertainty model M by the standard fuzzy lattice
Dom(M) = [0, 1],   ≤_M = ≤,   ⊗_M = min,   ⊕_M = max,   N_M(a) = 1 − a,
and we take the (S-)implication
a ⇒_M b := max(1 − a, b)   (a, b ∈ [0, 1]).
Let RM : X × X → [0, 1] be a similarity relation between servers (e.g., from traffic-profile
similarity):
| R_M(x_i, x_j) | x1 | x2 | x3 |
| --- | --- | --- | --- |
| x1 | 1.0 | 0.7 | 0.3 |
| x2 | 0.7 | 1.0 | 0.5 |
| x3 | 0.3 | 0.5 | 1.0 |

(which is reflexive and symmetric).
By Definition 3.6.5, since X is finite we may read ⋀ and ⋁ as min and max, respectively. Hence, for each x ∈ X,
µ_{R̲_M(A)}(x) = min_{y∈X} ( R_M(x, y) ⇒_M µ_A(y) ),
µ_{R̄_M(A)}(x) = max_{y∈X} min( R_M(x, y), µ_A(y) ).
A direct calculation gives:
µ_{R̲_M(A)}(x1) = 0.6,   µ_{R̲_M(A)}(x2) = 0.5,   µ_{R̲_M(A)}(x3) = 0.2,
µ_{R̄_M(A)}(x1) = 0.9,   µ_{R̄_M(A)}(x2) = 0.7,   µ_{R̄_M(A)}(x3) = 0.5.
Therefore the uncertain rough set of A w.r.t. R_M is
UR_M(A) = ( R̲_M(A), R̄_M(A) ),
where R̲_M(A) and R̄_M(A) are the uncertain sets with the above memberships.
If we also form region-style uncertain sets (using N_M(a) = 1 − a and ⊗_M = min), then
µ_{POS_M(A)} = µ_{R̲_M(A)},
µ_{NEG_M(A)}(x) = 1 − µ_{R̄_M(A)}(x),
µ_{BND_M(A)}(x) = min( µ_{R̄_M(A)}(x), 1 − µ_{R̲_M(A)}(x) ),
so in particular
µ_{NEG_M(A)}(x1) = 0.1,   µ_{NEG_M(A)}(x2) = 0.3,   µ_{NEG_M(A)}(x3) = 0.5,
µ_{BND_M(A)}(x1) = 0.4,   µ_{BND_M(A)}(x2) = 0.5,   µ_{BND_M(A)}(x3) = 0.5.
Interpretation. R̲_M(A) is a conservative (certainty-oriented) anomaly score that requires consistency across similar servers, whereas R̄_M(A) is permissive (possibility-oriented), propagating suspicion along similarity links.
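The direct calculation can be checked mechanically. A Python sketch (names ours) instantiating the fuzzy lattice of the example:

```python
# Sketch (names ours): fuzzy-lattice instance of the M-lower/upper rough
# approximations, reproducing the three-server anomaly numbers above.

X = ["x1", "x2", "x3"]
mu = {"x1": 0.9, "x2": 0.6, "x3": 0.2}   # membership in "compromised server"
R = {("x1", "x1"): 1.0, ("x1", "x2"): 0.7, ("x1", "x3"): 0.3,
     ("x2", "x1"): 0.7, ("x2", "x2"): 1.0, ("x2", "x3"): 0.5,
     ("x3", "x1"): 0.3, ("x3", "x2"): 0.5, ("x3", "x3"): 1.0}

def implies(a, b):
    # Kleene-Dienes (S-)implication in the standard fuzzy lattice
    return max(1 - a, b)

def lower(x):
    # certainty-oriented score: meet of implications over all y
    return min(implies(R[(x, y)], mu[y]) for y in X)

def upper(x):
    # possibility-oriented score: join of conjunctions over all y
    return max(min(R[(x, y)], mu[y]) for y in X)

assert [round(lower(x), 2) for x in X] == [0.6, 0.5, 0.2]
assert [round(upper(x), 2) for x in X] == [0.9, 0.7, 0.5]
```

Swapping `implies` for another fuzzy implication (e.g., a residuated one) changes only the lower approximation, which is why the model fixes the implication up front.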


# Page. 117

![Page Image](https://bcdn.docswell.com/page/L71Y4YVDJG.jpg)

Theorem 3.6.7 (Uncertain rough sets generalize several classical models). Under the assumptions of Definition 3.6.5:
(a) (Closure / well-definedness) R̲_M(A) and R̄_M(A) are U-Sets of type M.
(b) (U-Sets are recovered as a special case) Let ∆_M be the M-identity relation
∆_M(x, y) := 1_M if x = y, and 0_M if x ≠ y.
Then for every U-Set A of type M,
∆̲_M(A) = A = ∆̄_M(A).
(c) (Pawlak rough sets are recovered) Take Dom(M) = {0, 1} with ⊕_M = ∨, ⊗_M = ∧, and N_M(a) = 1 − a. Let E ⊆ X × X be a crisp equivalence relation and define its characteristic map
R_M(x, y) := 1 if (x, y) ∈ E, and 0 if (x, y) ∉ E.
Identify any crisp set A ⊆ X with its characteristic function µ_A : X → {0, 1}. Then R̲_M(A) and R̄_M(A) are exactly the Pawlak lower and upper approximations of A in the approximation space (X, E), and POS, NEG, BND reduce to the usual positive/negative/boundary regions.
(d) (Fuzzy rough sets are recovered) Take Dom(M) = [0, 1], ⊕_M = max, ⊗_M = min, N_M(a) = 1 − a. Let R_M : X × X → [0, 1] be a fuzzy similarity relation and let µ_A : X → [0, 1] be a fuzzy set. Then
µ_{R̲_M(A)}(x) = inf_{y∈X} max( 1 − R_M(x, y), µ_A(y) ),
µ_{R̄_M(A)}(x) = sup_{y∈X} min( R_M(x, y), µ_A(y) ),
i.e., the standard fuzzy-rough approximations based on the Kleene–Dienes implication and min–max connectives.
(e) (Single-valued neutrosophic rough sets are recovered) Take Dom(M) = [0, 1]³ with componentwise order, componentwise join/meet
(t, i, f) ⊕_M (t′, i′, f′) = (max(t, t′), max(i, i′), max(f, f′)),
(t, i, f) ⊗_M (t′, i′, f′) = (min(t, t′), min(i, i′), min(f, f′)),
and neutrosophic complement
N_M(t, i, f) = (f, 1 − i, t).
Let R_M : X × X → [0, 1]³ be a single-valued neutrosophic relation and µ_A : X → [0, 1]³ a single-valued neutrosophic set. Then Definition 3.6.5 yields neutrosophic lower/upper rough approximations (componentwise) and hence a neutrosophic rough set model.


# Page. 118

![Page Image](https://bcdn.docswell.com/page/G7WGXGL8E2.jpg)

Proof. (a) Since Dom(M) is closed under ⊕_M, ⊗_M, N_M, it is closed under ⇒_M. Because X is finite and Dom(M) is a lattice, finite meets and joins exist in Dom(M). Hence µ_{R̲_M(A)}(x), µ_{R̄_M(A)}(x) ∈ Dom(M) for all x, so both are U-Sets.
(b) Fix x ∈ X. Using N_M(1_M) = 0_M, 0_M ⊕_M a = a, and 1_M ⊗_M a = a,
∆_M(x, x) ⇒_M µ_A(x) = 1_M ⇒_M µ_A(x) = N_M(1_M) ⊕_M µ_A(x) = µ_A(x).
For y ≠ x, ∆_M(x, y) = 0_M, so
∆_M(x, y) ⇒_M µ_A(y) = 0_M ⇒_M µ_A(y) = N_M(0_M) ⊕_M µ_A(y) = 1_M.
Therefore the meet over all y ∈ X equals µ_A(x):
µ_{∆̲_M(A)}(x) = µ_A(x) ⊗_M ⋀_{y≠x} 1_M = µ_A(x).
Similarly,
µ_{∆̄_M(A)}(x) = ⋁_{y∈X} ( ∆_M(x, y) ⊗_M µ_A(y) ) = ( 1_M ⊗_M µ_A(x) ) ⊕_M ⋁_{y≠x} ( 0_M ⊗_M µ_A(y) ) = µ_A(x).
(c) In the Boolean case, a ⇒_M b = ¬a ∨ b. Thus
µ_{R̲_M(A)}(x) = ⋀_{y∈X} ( ¬R_M(x, y) ∨ µ_A(y) ) = 1
iff for all y, R_M(x, y) = 1 implies µ_A(y) = 1, i.e. the E-equivalence class of x is contained in A. Also
µ_{R̄_M(A)}(x) = ⋁_{y∈X} ( R_M(x, y) ∧ µ_A(y) ) = 1
iff there exists y in the E-class of x with y ∈ A, i.e. the class intersects A. These are exactly Pawlak's lower and upper approximations; the region identities follow by the Boolean specialization of the pointwise definitions of POS_M, NEG_M, BND_M.
(d) Substituting ⊕_M = max, ⊗_M = min, N_M(a) = 1 − a into the implication of Definition 3.6.2 gives R_M(x, y) ⇒_M µ_A(y) = max(1 − R_M(x, y), µ_A(y)), and the meet/join over the finite X become inf/sup, yielding the stated fuzzy-rough formulas.
(e) With componentwise max/min and N_M(t, i, f) = (f, 1 − i, t), the operations in Definition 3.6.5 act componentwise on [0, 1]³, so the resulting lower/upper approximations are single-valued neutrosophic sets obtained by the same rough-approximation scheme in the neutrosophic degree-domain.
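Part (c) can be illustrated numerically. The following Python sketch (universe, partition, and names are ours, not from the source) checks that the Boolean instance of the generic operators coincides with Pawlak's approximations on a small example:

```python
# Sketch of Theorem 3.6.7(c) (names ours): over the Boolean degree-domain {0, 1},
# the generic uncertain-rough operators coincide with Pawlak's approximations.

U = ["a", "b", "c", "d"]
blocks = [{"a", "b"}, {"c", "d"}]  # classes of a crisp equivalence relation E
E = {(x, y): int(any(x in B and y in B for B in blocks)) for x in U for y in U}
A = {"a", "b", "c"}                # crisp target set
chi = {x: int(x in A) for x in U}  # characteristic function of A

def lower(x):
    # Boolean meet of implications: (not E(x, y)) or chi(y)
    return min(max(1 - E[(x, y)], chi[y]) for y in U)

def upper(x):
    # Boolean join of conjunctions: E(x, y) and chi(y)
    return max(min(E[(x, y)], chi[y]) for y in U)

def block_of(x):
    return next(B for B in blocks if x in B)

pawlak_lower = {x for x in U if block_of(x) <= A}
pawlak_upper = {x for x in U if block_of(x) & A}
assert {x for x in U if lower(x) == 1} == pawlak_lower == {"a", "b"}
assert {x for x in U if upper(x) == 1} == pawlak_upper == {"a", "b", "c", "d"}
```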


# Page. 119

![Page Image](https://bcdn.docswell.com/page/4JZL6LM9E3.jpg)

3.7 Functorial Rough Set
A functorial set is a categorical device for organizing a family of sets that vary across contexts
(objects) together with coherent transport maps along context-changes (morphisms). Concretely,
it consists of a category equipped with a covariant functor to Set [281]. A functorial rough set
then equips each fiber F (X) with an indiscernibility relation and forms Pawlak-style lower/upper
approximations objectwise, producing a family of rough approximation pairs indexed by Ob(C).
Definition 3.7.1 (Functorial set). [281] Let C be a category and let
F : C −→ Set
be a covariant functor. The pair (C, F ) is called a functorial set. For each object X ∈ Ob(C),
the set F (X) is interpreted as the collection of F -structures attached to X . Every morphism
f : X → Y induces a structure-preserving map
F (f ) : F (X) −→ F (Y ),
such that F (idX ) = idF (X) and
F (g ◦ f ) = F (g) ◦ F (f )
for all composable morphisms f, g in C .
Definition 3.7.2 (Functorial rough approximation system). Let C be a category and let F :
C → Set be a covariant functor. A functorial rough approximation system is a triple
(C, F, R),
where R assigns to each object X ∈ Ob(C) an equivalence relation
RX ⊆ F (X) × F (X).
For x ∈ F (X) we write
[x]RX := { y ∈ F (X) | (x, y) ∈ RX }
for the RX -equivalence class of x.
Compatibility along morphisms (often assumed). In many applications one additionally requires that every morphism f : X → Y is relation-compatible:
(x, x′) ∈ R_X ⟹ ( F(f)(x), F(f)(x′) ) ∈ R_Y.
This expresses that indiscernibility is preserved under the transport F(f).
Definition 3.7.3 (Functorial rough set). Let (C, F, R) be as in Definition 3.7.2. A functorial
rough set is a choice of a family of target subsets
A = {AX ⊆ F (X)}X∈Ob(C) .
For each object X, define the Pawlak lower and upper approximations of A_X with respect to R_X by
A̲_X := { x ∈ F(X) | [x]_{R_X} ⊆ A_X },   Ā_X := { x ∈ F(X) | [x]_{R_X} ∩ A_X ≠ ∅ }.


# Page. 120

![Page Image](https://bcdn.docswell.com/page/YE6W2WNDEV.jpg)

The induced functorial rough approximation of A is the object-indexed family
FRS_R(A) := { (A̲_X, Ā_X) }_{X∈Ob(C)}.
Remark. The family A = {AX } may be specified independently at each object (e.g., labels
collected at different sites). If one wishes to enforce cross-context consistency, a natural condition
is F (f )(AX ) ⊆ AY for morphisms f : X → Y , i.e., A is a subfunctor of F .
Example 3.7.4 (Cross-store risk labeling as a functorial rough set). Let C be the small category
with two objects X, Y and morphisms
Mor(C) = {idX , idY , f : X → Y },
with the usual identities and compositions.
Define a functor F : C → Set by
F (X) = {p1 , p2 , p3 , p4 },
F (Y ) = {q1 , q2 , q3 },
and on the non-identity arrow f by
F (f ) : F (X) → F (Y ),
F (f )(p1 ) = q1 , F (f )(p2 ) = q1 , F (f )(p3 ) = q2 , F (f )(p4 ) = q3 .
Equip each fiber with an indiscernibility relation:
RX : {p1 , p2 } form one class and {p3 , p4 } form one class,
RY : {q1 } is a class and {q2 , q3 } is a class.
Equivalently,
[p1]_{R_X} = [p2]_{R_X} = {p1, p2},   [p3]_{R_X} = [p4]_{R_X} = {p3, p4},
[q1]_{R_Y} = {q1},   [q2]_{R_Y} = [q3]_{R_Y} = {q2, q3}.
Moreover, F (f ) is compatible with the relations: if u RX v then F (f )(u) RY F (f )(v).
Now define the target subsets (e.g., items flagged as “high return-risk”):
AX = {p2 , p3 } ⊆ F (X),
AY = {q3 } ⊆ F (Y ).
At X. Since neither [p1]_{R_X} = {p1, p2} nor [p3]_{R_X} = {p3, p4} is contained in A_X,
A̲_X = ∅.
Both classes intersect A_X, so
Ā_X = F(X).
At Y. Here [q1]_{R_Y} = {q1} and [q2]_{R_Y} = [q3]_{R_Y} = {q2, q3}, hence
A̲_Y = ∅,   Ā_Y = {q2, q3}.
Therefore,
FRS_R(A) = { (A̲_X, Ā_X), (A̲_Y, Ā_Y) } = { (∅, F(X)), (∅, {q2, q3}) }.
This captures, objectwise, how “definite” and “possible” high-risk items differ between the two
stores.
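The objectwise computations, together with the relation-compatibility of the transport map, can be sketched as follows (Python, names ours):

```python
# Sketch (names ours): objectwise Pawlak approximations for the two-store example,
# plus a check that the transport map F(f) respects the indiscernibility relations.

FX = {"p1", "p2", "p3", "p4"}
Ff = {"p1": "q1", "p2": "q1", "p3": "q2", "p4": "q3"}  # transport F(f)
blocks_X = [{"p1", "p2"}, {"p3", "p4"}]  # classes of R_X on F(X)
blocks_Y = [{"q1"}, {"q2", "q3"}]        # classes of R_Y on F(Y)

def approximations(blocks, target):
    """Pawlak lower/upper approximation of `target` w.r.t. a partition."""
    lower = {x for B in blocks if B <= target for x in B}
    upper = {x for B in blocks if B & target for x in B}
    return lower, upper

AX, AY = {"p2", "p3"}, {"q3"}  # flagged high return-risk items per store
assert approximations(blocks_X, AX) == (set(), FX)
assert approximations(blocks_Y, AY) == (set(), {"q2", "q3"})

# relation-compatibility: R_X-indiscernible items map to R_Y-indiscernible items
block_of = lambda e, blocks: next(B for B in blocks if e in B)
assert all(block_of(Ff[u], blocks_Y) == block_of(Ff[v], blocks_Y)
           for B in blocks_X for u in B for v in B)
```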


# Page. 121

![Page Image](https://bcdn.docswell.com/page/GE5M2M58E4.jpg)

3.8 Near Rough Set
Near sets are collections of objects considered close when at least one pair shares sufficiently similar descriptions under probe functions [282–285]. Near rough sets approximate a target set using tolerance classes from descriptive nearness, yielding a lower (definite) region and an upper (possible) region [286–288].
Definition 3.8.1 (Near rough set (tolerance-cover rough set)). Let O ≠ ∅ be a (finite) universe of objects and let B = {ϕ1, ..., ϕm} be a finite family of probe functions ϕi : O → R describing observable features. Fix p ∈ [1, ∞] and a tolerance level ε ≥ 0, and define the descriptive tolerance relation ≈_{B,ε} ⊆ O × O by
x ≈_{B,ε} y ⟺ ‖Φ_B(x) − Φ_B(y)‖_p ≤ ε,   where Φ_B(x) := (ϕ1(x), ..., ϕm(x)) ∈ R^m.
A subset A ⊆ O is called an ≈_{B,ε}-preclass if x ≈_{B,ε} y for all x, y ∈ A. A tolerance class is a maximal ≈_{B,ε}-preclass (w.r.t. inclusion). Let H^ε_B(O) denote the family of all tolerance classes; then H^ε_B(O) is a covering of O.
For any target set X ⊆ O, define the near lower and near upper approximations by
B_∗^ε(X) := ⋃ { A ∈ H^ε_B(O) | A ⊆ X },   B^{ε∗}(X) := ⋃ { A ∈ H^ε_B(O) | A ∩ X ≠ ∅ }.
The ordered pair
( B_∗^ε(X), B^{ε∗}(X) )
is called the near rough set (or tolerance-cover rough set) of X induced by (O, B, ε, p). Its regions are
POS^ε_B(X) := B_∗^ε(X),   BND^ε_B(X) := B^{ε∗}(X) \ B_∗^ε(X),   NEG^ε_B(X) := O \ B^{ε∗}(X).
We say that X is rough (under (O, B, ε, p)) if BND^ε_B(X) ≠ ∅, and (cover-)definable otherwise.
Example 3.8.2 (Near rough set for product-return risk using tolerance-cover classes). Let O =
{o1 , o2 , o3 , o4 , o5 } be five online purchases. We describe each purchase by two probe functions
B = {ϕ1 , ϕ2 },
where ϕ1(o) is the delivery-time deviation (days late) and ϕ2(o) is the item price (in USD). Assume the observed values are:

| o | ϕ1(o) (days) | ϕ2(o) (USD) |
| --- | --- | --- |
| o1 | 1.0 | 100 |
| o2 | 1.5 | 98 |
| o3 | 4.0 | 105 |
| o4 | 4.5 | 108 |
| o5 | 7.0 | 160 |
Fix p = 2 and tolerance ε = 3. Then the descriptive tolerance relation is
x ≈_{B,ε} y ⟺ ‖Φ_B(x) − Φ_B(y)‖_2 ≤ 3,


# Page. 122

![Page Image](https://bcdn.docswell.com/page/972949NVJR.jpg)

with Φ_B(o) = (ϕ1(o), ϕ2(o)) ∈ R². With this choice, one checks:
‖Φ_B(o1) − Φ_B(o2)‖_2 = √(0.5² + 2²) ≈ 2.06 ≤ 3,
‖Φ_B(o3) − Φ_B(o4)‖_2 = √(0.5² + 3²) ≈ 3.04 > 3,
but by adjusting ε slightly (e.g., ε = 3.1) these two become near; for concreteness, keep ε = 3.1 below. Also,
‖Φ_B(o1) − Φ_B(o3)‖_2 = √(3² + 5²) ≈ 5.83 > 3.1,
‖Φ_B(o4) − Φ_B(o5)‖_2 = √(2.5² + 52²) ≈ 52.06 ≫ 3.1.
Thus the maximal ≈_{B,ε}-preclasses (tolerance classes) can be taken as
H^ε_B(O) = { A1, A2, A3 },   A1 = {o1, o2},   A2 = {o3, o4},   A3 = {o5}.
(Indeed, objects within each Ai are pairwise near, and no Ai can be enlarged without breaking
nearness.)
Let X ⊆ O be the set of purchases that were actually returned:
X = {o2 , o3 }.
Then the near lower and near upper approximations are
B_∗^ε(X) = ⋃ { A ∈ H^ε_B(O) | A ⊆ X } = ∅,
since no tolerance class is fully contained in {o2, o3}, whereas
B^{ε∗}(X) = ⋃ { A ∈ H^ε_B(O) | A ∩ X ≠ ∅ } = A1 ∪ A2 = {o1, o2, o3, o4}.
Hence the induced regions are
POS^ε_B(X) = ∅,   BND^ε_B(X) = {o1, o2, o3, o4},   NEG^ε_B(X) = {o5}.
Interpretation: under tolerance (B, ε, p), returns cannot be asserted definitely for any class, but
any purchase near a returned one (in delay–price space) becomes possibly returned (boundary),
while o5 is definitely outside the returned concept.
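The tolerance-class construction can be verified by brute force. A Python sketch (names ours; ε = 3.1, p = 2 as above) that enumerates preclasses and keeps the maximal ones:

```python
# Sketch (names ours): tolerance classes and near approximations for the purchases
# example with p = 2 and eps = 3.1; maximal preclasses found by brute force.
from itertools import combinations
from math import dist

phi = {"o1": (1.0, 100.0), "o2": (1.5, 98.0), "o3": (4.0, 105.0),
       "o4": (4.5, 108.0), "o5": (7.0, 160.0)}
eps = 3.1

def near(x, y):
    # descriptive tolerance: Euclidean distance of feature vectors
    return dist(phi[x], phi[y]) <= eps

objs = sorted(phi)
preclasses = [set(c) for r in range(1, len(objs) + 1)
              for c in combinations(objs, r)
              if all(near(x, y) for x, y in combinations(c, 2))]
classes = [A for A in preclasses if not any(A < B for B in preclasses)]  # maximal

X = {"o2", "o3"}  # purchases that were actually returned
lower = {o for A in classes if A <= X for o in A}
upper = {o for A in classes if A & X for o in A}

assert sorted(map(sorted, classes)) == [["o1", "o2"], ["o3", "o4"], ["o5"]]
assert lower == set() and upper == {"o1", "o2", "o3", "o4"}
```

Brute-force enumeration is exponential in |O|; it is only meant to make the small example checkable, not to suggest a practical algorithm.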
3.9 Z-Rough Set
A Z-number is an ordered pair of fuzzy numbers: one represents a fuzzy restriction on a variable’s value, and the other represents a fuzzy assessment of the reliability (credibility) of that
restriction [18]. Related concepts include intuitionistic fuzzy Z-numbers [289–291] and neutrosophic Z-numbers [292–295]. Z-rough sets define lower and upper approximations when objects
have Z-valued memberships, combining value uncertainty with reliability information during
granulation.


# Page. 123

![Page Image](https://bcdn.docswell.com/page/DJY4M4PQ7M.jpg)

Definition 3.9.1 (Z-number). [18] Let F̃(R) denote the family of fuzzy numbers on R, i.e.,
(normalized) fuzzy sets with membership functions µÃ : R → [0, 1]. A Z-number is an ordered
pair
Z = (Ã, R̃) ∈ F̃(R) × F̃([0, 1]),
where Ã is a fuzzy restriction on the (unknown) value of a variable, and R̃ is a fuzzy restriction
expressing the reliability (credibility) of Ã.
Definition 3.9.2 (Lattice of Z-numbers). Let Z := F̃(R) × F̃([0, 1]). Define a partial order ⪯ on Z by
(Ã1, R̃1) ⪯ (Ã2, R̃2) ⟺ µ_Ã1(t) ≤ µ_Ã2(t) (∀t ∈ R) and µ_R̃1(s) ≤ µ_R̃2(s) (∀s ∈ [0, 1]).
Define meet and join (componentwise) by
(Ã1, R̃1) ∧ (Ã2, R̃2) := (Ã1 ∩ Ã2, R̃1 ∩ R̃2),   (Ã1, R̃1) ∨ (Ã2, R̃2) := (Ã1 ∪ Ã2, R̃1 ∪ R̃2),
where for fuzzy sets B̃1, B̃2 we use the standard pointwise operations µ_{B̃1∩B̃2} = min(µ_B̃1, µ_B̃2) and µ_{B̃1∪B̃2} = max(µ_B̃1, µ_B̃2). Then (Z, ⪯, ∧, ∨) is a complete lattice (as a product of complete lattices).
Definition 3.9.3 (Z-valued set). Let U ≠ ∅ be a universe. A Z-valued set (briefly, a Z-set) on U is a mapping
A : U −→ Z.
Definition 3.9.4 (Z-rough lower/upper approximations). Let (U, R) be a Pawlak approximation space, i.e., R ⊆ U × U is an equivalence relation, and write [x]_R := {y ∈ U | (x, y) ∈ R}. Let A : U → Z be a Z-set. Define the Z-rough lower and Z-rough upper approximations of A w.r.t. R by the Z-sets R̲(A), R̄(A) : U → Z given for each x ∈ U by
R̲(A)(x) := ⋀_{y∈[x]_R} A(y),   R̄(A)(x) := ⋁_{y∈[x]_R} A(y),
where ∧, ∨ are the lattice operations on Z.
Definition 3.9.5 (Z-rough set). Under the assumptions above, the ordered pair
( R̲(A), R̄(A) )
is called the Z-rough set induced by A (with respect to R). If R̲(A) ≠ R̄(A), then A is said to be Z-rough (not exactly definable) under R.
Example 3.9.6 (Z-rough set for fever screening with uncertain readings). Consider a hospital
triage desk that measures patients’ body temperatures using two different infrared thermometers
(devices). Let
U = {p1 , p2 , p3 , p4 }


# Page. 124

![Page Image](https://bcdn.docswell.com/page/V7NYWY52E8.jpg)

be the set of patients. Define an equivalence relation R on U by
(pi , pj ) ∈ R
⇐⇒
pi and pj were measured by the same device.
Assume the two device-classes are
[p1 ]R = [p2 ]R = {p1 , p2 },
[p3 ]R = [p4 ]R = {p3 , p4 }.
Let Tri(a, b, c) denote the (triangular) fuzzy number on R with membership
µ_Tri(a,b,c)(t) = 0 for t ≤ a,   (t − a)/(b − a) for a < t ≤ b,   (c − t)/(c − b) for b < t < c,   0 for t ≥ c,
and similarly for fuzzy numbers on [0, 1].
Define a Z-set A : U → Z = F̃(R) × F̃([0, 1]) by assigning to each patient a Z-number
A(pi) = (T̃i, r̃i),
where T̃i is a fuzzy restriction on the (unknown) temperature (in °C) and r̃i is a fuzzy restriction on reliability. For example, set
A(p1) = ( Tri(37.5, 38.0, 38.6), Tri(0.70, 0.80, 0.90) ),
A(p2) = ( Tri(37.3, 37.8, 38.4), Tri(0.55, 0.65, 0.75) ),
A(p3) = ( Tri(36.8, 37.2, 37.7), Tri(0.80, 0.90, 1.00) ),
A(p4) = ( Tri(37.0, 37.4, 38.0), Tri(0.60, 0.75, 0.90) ).
Using the product-lattice operations on Z (componentwise ∧, ∨), the Z-rough lower and upper approximations are, for x ∈ U,
R̲(A)(x) = ⋀_{y∈[x]_R} A(y),   R̄(A)(x) = ⋁_{y∈[x]_R} A(y).
Hence, explicitly on each equivalence class,
R̲(A)(p1) = R̲(A)(p2) = A(p1) ∧ A(p2) = ( T̃1 ∩ T̃2, r̃1 ∩ r̃2 ),
R̄(A)(p1) = R̄(A)(p2) = A(p1) ∨ A(p2) = ( T̃1 ∪ T̃2, r̃1 ∪ r̃2 ),
R̲(A)(p3) = R̲(A)(p4) = A(p3) ∧ A(p4) = ( T̃3 ∩ T̃4, r̃3 ∩ r̃4 ),
R̄(A)(p3) = R̄(A)(p4) = A(p3) ∨ A(p4) = ( T̃3 ∪ T̃4, r̃3 ∪ r̃4 ),
where, for fuzzy sets B̃1, B̃2 on the same domain,
µ_{B̃1∩B̃2}(t) = min{µ_B̃1(t), µ_B̃2(t)},   µ_{B̃1∪B̃2}(t) = max{µ_B̃1(t), µ_B̃2(t)}.
Interpretation. Within each device-class, R̲(A) gives a conservative (worst-case) Z-description of temperature and reliability shared by all patients measured by that device, while R̄(A) gives a permissive (best-case) Z-description capturing any plausible fever indication in that class.
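The pointwise meet/join on a class can be sketched with triangular membership functions evaluated directly. A Python sketch (helper names ours) for the temperature components on the class {p1, p2}:

```python
# Sketch (helper names ours): Z-rough meet/join with triangular fuzzy numbers,
# evaluated pointwise as on the device-class {p1, p2} above.

def tri(a, b, c):
    """Membership function of the triangular fuzzy number Tri(a, b, c)."""
    def mu(t):
        if a < t <= b:
            return (t - a) / (b - a)
        if b < t < c:
            return (c - t) / (c - b)
        return 0.0
    return mu

T1 = tri(37.5, 38.0, 38.6)  # temperature restriction of A(p1)
T2 = tri(37.3, 37.8, 38.4)  # temperature restriction of A(p2)

def fmeet(m1, m2):  # pointwise min = fuzzy intersection (lattice meet component)
    return lambda t: min(m1(t), m2(t))

def fjoin(m1, m2):  # pointwise max = fuzzy union (lattice join component)
    return lambda t: max(m1(t), m2(t))

lower_T = fmeet(T1, T2)  # temperature part of the Z-rough lower approximation
upper_T = fjoin(T1, T2)  # temperature part of the Z-rough upper approximation

assert round(lower_T(37.8), 2) == 0.6   # min(T1(37.8), T2(37.8)) = min(0.6, 1.0)
assert round(upper_T(37.8), 2) == 1.0
```

The reliability components r̃i are combined the same way on [0, 1].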


# Page. 125

![Page Image](https://bcdn.docswell.com/page/YJ9PXPYD73.jpg)

3.10 D-Rough Set
A D-number assigns masses to nonempty subsets of a frame and allows incomplete evidence: the total assigned mass may be less than one [296–298]. D-rough sets compute lower and upper approximations when object membership is described by D-numbers, aggregating evidence-based uncertainty consistently across granules.
Definition 3.10.1 (D-number). Let Θ be a nonempty finite set (called a frame of discernment). A D-number on Θ is a mapping
D : 2^Θ −→ [0, 1]
satisfying
D(∅) = 0,   Σ_{B⊆Θ} D(B) ≤ 1.
The quantity 1 − Σ_{B⊆Θ} D(B) represents incomplete (unassigned) evidence. Denote by D(Θ) the set of all D-numbers on Θ.
Definition 3.10.2 (D-rough set (D-number-valued rough description)). Let U be a nonempty
finite universe and let R ⊆ U × U be an equivalence relation. For x ∈ U , write
[x]R := { y ∈ U | (x, y) ∈ R }.
Fix the two-element frame Θ := {+, −}, where “+” means “in X” and “−” means “not in X”. For any (crisp) subset X ⊆ U, define a mapping
D_{R,X} : U −→ D(Θ)
by assigning to each x ∈ U the D-number D_{R,X}(x) given by
D_{R,X}(x)({+}) := |[x]_R ∩ X| / |[x]_R|,   D_{R,X}(x)({−}) := |[x]_R \ X| / |[x]_R|,
and D_{R,X}(x)(B) := 0 for all B ⊆ Θ with B ∉ { {+}, {−} }.
Then D_{R,X}(x)(∅) = 0 and Σ_{B⊆Θ} D_{R,X}(x)(B) = 1 for all x, hence D_{R,X}(x) is a D-number. The mapping D_{R,X} (equivalently, the triple (U, R, D_{R,X})) is called the D-rough set (or D-number-valued rough description) of X induced by (U, R).
Example 3.10.3 (D-rough set for loan-approval tendency). Consider a bank that groups past
applicants into indiscernibility classes according to a coarse risk profile (e.g., the pair “employment stability” × “debt-to-income band”). Let
U = {u1 , u2 , u3 , u4 , u5 , u6 }
be six past applicants, and let R be the equivalence relation on U that yields the following
classes:
[u1 ]R = [u2 ]R = {u1 , u2 },
[u3 ]R = [u4 ]R = [u5 ]R = {u3 , u4 , u5 },
[u6 ]R = {u6 }.


# Page. 126

![Page Image](https://bcdn.docswell.com/page/GJ8D2D65JD.jpg)

Let X ⊆ U be the set of applicants that were approved:
X = {u1 , u3 , u4 }.
Fix Θ = {+, −}, where “+” means “approved (in X)” and “−” means “not approved (not in X)”. For each x ∈ U, the D-rough set mapping D_{R,X} : U → D(Θ) assigns the empirical proportions inside the R-class of x:
D_{R,X}(x)({+}) = |[x]_R ∩ X| / |[x]_R|,   D_{R,X}(x)({−}) = |[x]_R \ X| / |[x]_R|.
Hence,
D_{R,X}(u1) = D_{R,X}(u2) : {+} ↦ 1/2, {−} ↦ 1/2,
D_{R,X}(u3) = D_{R,X}(u4) = D_{R,X}(u5) : {+} ↦ 2/3, {−} ↦ 1/3,
D_{R,X}(u6) : {+} ↦ 0, {−} ↦ 1.
Interpreting D_{R,X}(x)({+}) as the data-supported tendency of approval within x's risk-profile class, we can induce (α, β)-approximations. For example, take α = 0.6 and β = 0.4:
apr^D_α(X) = {x ∈ U | D_{R,X}(x)({+}) ≥ 0.6} = {u3, u4, u5},
apr^D_β(X) = {x ∈ U | D_{R,X}(x)({+}) > 0.4} = {u1, u2, u3, u4, u5}.
Therefore the induced regions are
POS^D_{α,β}(X) = {u3, u4, u5},   NEG^D_{α,β}(X) = {u6},   BND^D_{α,β}(X) = {u1, u2}.
In words, applicants in the second class are positively supported as “approved-like” (high approval
tendency), u6 is negative, and the first class forms a boundary region where approval evidence is
mixed.
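The empirical masses and the induced regions can be computed exactly with rational arithmetic. A Python sketch (names ours):

```python
# Sketch (names ours): the D-number mass on {+} per applicant and the induced
# (alpha, beta)-regions for the loan example, using exact fractions.
from fractions import Fraction

blocks = [{"u1", "u2"}, {"u3", "u4", "u5"}, {"u6"}]  # classes of R
X = {"u1", "u3", "u4"}                               # approved applicants

def block_of(x):
    return next(B for B in blocks if x in B)

def d_plus(x):
    # mass assigned to {+}: fraction of x's class that lies in X
    B = block_of(x)
    return Fraction(len(B & X), len(B))

U = {u for B in blocks for u in B}
alpha, beta = Fraction(3, 5), Fraction(2, 5)  # alpha = 0.6, beta = 0.4
POS = {x for x in U if d_plus(x) >= alpha}
NEG = {x for x in U if d_plus(x) <= beta}
BND = U - POS - NEG

assert d_plus("u1") == Fraction(1, 2) and d_plus("u3") == Fraction(2, 3)
assert POS == {"u3", "u4", "u5"} and NEG == {"u6"} and BND == {"u1", "u2"}
```

Using `Fraction` avoids threshold comparisons being distorted by floating-point rounding (e.g., 2/3 versus 0.6).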
Definition 3.10.4 (Induced (α, β)-approximations from a D-rough set). In the setting of Definition 3.10.2, fix thresholds 0 ≤ β < α ≤ 1. Define the (α, β)-lower and (α, β)-upper approximations of X by
apr^D_α(X) := { x ∈ U | D_{R,X}(x)({+}) ≥ α },   apr^D_β(X) := { x ∈ U | D_{R,X}(x)({+}) > β }.
The associated positive, boundary, and negative regions are
POS^D_{α,β}(X) := apr^D_α(X),
NEG^D_{α,β}(X) := { x ∈ U | D_{R,X}(x)({+}) ≤ β },
BND^D_{α,β}(X) := U \ ( POS^D_{α,β}(X) ∪ NEG^D_{α,β}(X) ).
Theorem 3.10.5 (Reduction to classical (Pawlak) rough sets). Let (U, R) be an approximation space and X ⊆ U. Let R̲(X) := {x ∈ U | [x]_R ⊆ X} and R̄(X) := {x ∈ U | [x]_R ∩ X ≠ ∅} be the Pawlak lower and upper approximations. For the D-rough set D_{R,X} of Definition 3.10.2,
apr^D_1(X) = R̲(X),   apr^D_0(X) = R̄(X).
Proof. For any x ∈ U,
D_{R,X}(x)({+}) = |[x]_R ∩ X| / |[x]_R|.
Thus D_{R,X}(x)({+}) ≥ 1 holds iff |[x]_R ∩ X| = |[x]_R|, i.e. [x]_R ⊆ X. Hence apr^D_1(X) = R̲(X). Also, D_{R,X}(x)({+}) > 0 holds iff |[x]_R ∩ X| > 0, i.e. [x]_R ∩ X ≠ ∅. Hence apr^D_0(X) = R̄(X).


# Page. 127

![Page Image](https://bcdn.docswell.com/page/LJLM2M93ER.jpg)

3.11 Similarity-Based Rough Sets
A similarity-based rough set is a rough-set model that represents uncertainty by using a similarity relation S and a threshold τ to induce tolerance neighborhoods, and then defining the lower (definite membership) and upper (possible membership) approximations of a target set X [299–305].
Definition 3.11.1 (Similarity-based (tolerance) rough set). Let U ≠ ∅ be a (finite) universe and let
S : U × U −→ [0, 1]
be a similarity relation (or similarity measure) satisfying at least
S(x, x) = 1 and S(x, y) = S(y, x)   (x, y ∈ U).
Fix a threshold τ ∈ (0, 1] and define the induced tolerance relation
RS,τ := {(x, y) ∈ U × U | S(x, y) ≥ τ }.
For each x ∈ U , its tolerance neighborhood (class) is
NS,τ (x) := { y ∈ U | (x, y) ∈ RS,τ } = { y ∈ U | S(x, y) ≥ τ }.
For any target set X ⊆ U, define the similarity-based lower and upper approximations by
R̲_{S,τ}(X) := { x ∈ U | N_{S,τ}(x) ⊆ X },   R̄_{S,τ}(X) := { x ∈ U | N_{S,τ}(x) ∩ X ≠ ∅ }.
The ordered pair
( R̲_{S,τ}(X), R̄_{S,τ}(X) )
is called the similarity-based rough set (also: tolerance rough set) of X induced by (U, S, τ). The positive, boundary, and negative regions are
POS_{S,τ}(X) := R̲_{S,τ}(X),   BND_{S,τ}(X) := R̄_{S,τ}(X) \ R̲_{S,τ}(X),   NEG_{S,τ}(X) := U \ R̄_{S,τ}(X).
Example 3.11.2 (Loan-risk screening via similarity-based (tolerance) rough sets). Let U =
{a, b, c, d} be four loan applicants. Assume S(x, y) ∈ [0, 1] is computed from standardized
applicant-feature vectors (e.g., income, debt-to-income ratio, employment stability) using a similarity score. Define S : U × U → [0, 1] by the symmetric table
| S | a | b | c | d |
| --- | --- | --- | --- | --- |
| a | 1 | 0.90 | 0.70 | 0.20 |
| b | 0.90 | 1 | 0.80 | 0.30 |
| c | 0.70 | 0.80 | 1 | 0.60 |
| d | 0.20 | 0.30 | 0.60 | 1 |
so S(x, x) = 1 and S(x, y) = S(y, x) for all x, y ∈ U . Fix the threshold τ = 0.8 and form the
tolerance relation
RS,τ = {(x, y) ∈ U × U | S(x, y) ≥ τ }.


# Page. 128

![Page Image](https://bcdn.docswell.com/page/47MY8YZM7W.jpg)

Then the tolerance neighborhoods are
N_{S,τ}(a) = {a, b},   N_{S,τ}(b) = {a, b, c},   N_{S,τ}(c) = {b, c},   N_{S,τ}(d) = {d}.
Let X = {a, b} ⊆ U be the set of applicants labeled as high-risk by historical outcomes (e.g.,
later default/charge-off). The similarity-based lower and upper approximations are
R̲_{S,τ}(X) = {x ∈ U | N_{S,τ}(x) ⊆ X} = {a},
R̄_{S,τ}(X) = {x ∈ U | N_{S,τ}(x) ∩ X ≠ ∅} = {a, b, c}.
Hence the induced regions are
POSS,τ (X) = {a},
BNDS,τ (X) = {b, c},
NEGS,τ (X) = {d}.
In words, a is definitely high-risk (all sufficiently similar applicants are in X ), c is possibly
high-risk (similar to b ∈ X ), and d is definitely not high-risk at level τ .
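The neighborhood computation at τ = 0.8 can be sketched as follows (Python, names ours):

```python
# Sketch (names ours): tolerance neighborhoods and similarity-based
# approximations for the four applicants at tau = 0.8.

U = ["a", "b", "c", "d"]
S = {"a": {"a": 1.0, "b": 0.90, "c": 0.70, "d": 0.20},
     "b": {"a": 0.90, "b": 1.0, "c": 0.80, "d": 0.30},
     "c": {"a": 0.70, "b": 0.80, "c": 1.0, "d": 0.60},
     "d": {"a": 0.20, "b": 0.30, "c": 0.60, "d": 1.0}}
tau = 0.8

def N(x):
    # tolerance neighborhood: all applicants at least tau-similar to x
    return {y for y in U if S[x][y] >= tau}

X = {"a", "b"}  # historically high-risk applicants
lower = {x for x in U if N(x) <= X}   # definite membership
upper = {x for x in U if N(x) & X}    # possible membership

assert N("b") == {"a", "b", "c"}
assert lower == {"a"} and upper == {"a", "b", "c"}
assert upper - lower == {"b", "c"}    # boundary region
```

Note that, unlike an equivalence relation, the tolerance relation is not transitive, so the neighborhoods N(x) overlap without forming a partition.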


# Page. 129

![Page Image](https://bcdn.docswell.com/page/P7R9596LE9.jpg)



# Page. 130

![Page Image](https://bcdn.docswell.com/page/PJXQKQD67X.jpg)

Chapter 4
Some Related Concepts for Rough Sets
In this chapter, we present some related concepts for rough sets.
4.1 Rough Graph
A rough graph models edge uncertainty by grouping edges via an equivalence relation and approximating a target edge set with lower and upper subgraphs [306–309]. Related notions include
fuzzy rough graphs [310, 311], soft rough graphs [312, 313], neutrosophic rough graphs [314, 315],
rough hypergraphs, and rough superhypergraphs [316].
Definition 4.1.1 (Rough Graph). [317, 318] Let G = (V, E) be a graph where V is the set of vertices and E is the set of edges. Let R be an attribute set on E, inducing an equivalence relation on the edges. For any edge set X ⊆ E, the lower approximation of X with respect to R (denoted R̲(X)) is defined as:
R̲(X) = { e ∈ E | [e]_R ⊆ X },
where [e]_R denotes the equivalence class of e under R. The upper approximation of X (denoted R̄(X)) is defined as:
R̄(X) = { e ∈ E | [e]_R ∩ X ≠ ∅ }.
A graph G = (V, E) is called an R-rough graph if X is not exactly definable under R, and it is characterized by the pair (R̲(X), R̄(X)), where R̲(X) is the lower approximation graph and R̄(X) is the upper approximation graph.
Example 4.1.2 (Traffic monitoring with coarse road-type data: a rough graph). Consider a
small road network in which vertices are intersections and edges are road segments. Let
V = {A, B, C, D},
E = {e1 = AB, e2 = BC, e3 = CD, e4 = AD}.
Assume the traffic center does not observe congestion per individual segment; instead, it only
receives aggregated reports by road type. Define an edge attribute map
R : E → {Arterial, Residential},
R(e1 ) = R(e2 ) = Arterial, R(e3 ) = R(e4 ) = Residential.


# Page. 131

![Page Image](https://bcdn.docswell.com/page/3JK959XGJD.jpg)

This induces an equivalence relation on edges: e ∼_R e′ ⟺ R(e) = R(e′), with classes
[e1]_R = {e1, e2},   [e3]_R = {e3, e4}.
Suppose a noisy/aggregated sensor report flags the set of “congested” edges as
X = {e1 , e3 } ⊆ E,
meaning: there is evidence of congestion on at least one arterial segment and at least one
residential segment, but the system cannot distinguish which segment within each type is congested.
Using standard rough-set notation on edges, define
R̲(X) := {e ∈ E | [e]R ⊆ X}, R̄(X) := {e ∈ E | [e]R ∩ X ≠ ∅}.
Then R̲(X) = ∅, because neither [e1]R = {e1, e2} ⊆ X nor [e3]R = {e3, e4} ⊆ X, while
R̄(X) = E, because [e1]R ∩ X = {e1} ≠ ∅ and [e3]R ∩ X = {e3} ≠ ∅.
Hence the congestion information is not exactly definable under the road-type granulation: the
definitely congested subgraph (lower approximation) is empty, whereas the possibly congested
subgraph (upper approximation) is the entire road graph. This is a concrete real-life instance of
an R-rough graph caused by coarse, aggregated sensing.
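These two approximations are easy to recompute mechanically. The following Python sketch (variable names such as `road_type` are ours, not from the text) rebuilds the equivalence classes from the road-type map and checks both approximations:

```python
# Edge-level rough approximations for the traffic example.
# road_type plays the role of the attribute map R on edges.
road_type = {"e1": "Arterial", "e2": "Arterial",
             "e3": "Residential", "e4": "Residential"}
E = set(road_type)
X = {"e1", "e3"}  # edges flagged as congested by the aggregated report

def eq_class(e):
    """[e]_R: all edges sharing e's road type."""
    return {f for f in E if road_type[f] == road_type[e]}

lower = {e for e in E if eq_class(e) <= X}  # definitely congested
upper = {e for e in E if eq_class(e) & X}   # possibly congested

# The lower approximation comes out empty and the upper approximation is
# all of E, matching the text.
```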
4.2 Rough Topological Spaces
Rough topology, induced by an equivalence relation, takes as open sets the complements of the
sets closed under the upper approximation on U, forming a topological space [319–322].
Definition 4.2.1 (Rough topology and rough topological space). Let (U, R) be a Pawlak
approximation space. The rough topology induced by R is
τR := { O ⊆ U | R̄(Oᶜ) = Oᶜ }, where Oᶜ := U \ O.
Equivalently, τR = { U \ C | C ⊆ U, R̄(C) = C }, i.e., the open sets are the complements of the
R̄-closed sets. The pair (U, τR) is called a rough topological space (induced by R).
Example 4.2.2 (Coarse location privacy induces a rough topology). A mobile app may not
store exact GPS points, but only a coarse region label (for privacy). Let U be a finite set of
possible exact micro-locations (e.g., grid cells):
U = {ℓ1, ℓ2, ℓ3, ℓ4, ℓ5, ℓ6}.
Assume the phone reports only which coarse region a location belongs to, inducing an indiscernibility (equivalence) relation R with blocks
B1 = {ℓ1, ℓ2} (“Station area”), B2 = {ℓ3, ℓ4} (“Mall area”), B3 = {ℓ5, ℓ6} (“Park area”).


# Page. 132

![Page Image](https://bcdn.docswell.com/page/LE3WKW55E5.jpg)

For a set C ⊆ U, the Pawlak upper approximation is
R̄(C) = { u ∈ U | [u]R ∩ C ≠ ∅ } = ⋃ { Bi | Bi ∩ C ≠ ∅ }.
The rough topology τR consists of all sets O ⊆ U whose complement is R̄-closed:
τR = { O ⊆ U | R̄(Oᶜ) = Oᶜ }.
In this context, O ∈ τR means that “not O” is fully decidable from the coarse region label.
Concrete open set. Let
O := B1 ∪ B2 = {ℓ1, ℓ2, ℓ3, ℓ4},
interpreted as “the user is in the commercial zone (station or mall).” Then
Oᶜ = B3 = {ℓ5, ℓ6} and R̄(Oᶜ) = R̄(B3) = B3 = Oᶜ,
so O ∈ τR. Practically: if the phone reports Park area, we can conclude the user is not in the
commercial zone, without ambiguity.
A non-open set (not observable at this granularity). Let O′ := {ℓ1, ℓ3}, which mixes two
coarse regions. Then
(O′)ᶜ = {ℓ2, ℓ4, ℓ5, ℓ6} and R̄((O′)ᶜ) = B1 ∪ B2 ∪ B3 = U ≠ (O′)ᶜ,
so O′ ∉ τR. Practically: with only coarse labels, the event “ℓ1 or ℓ3” cannot be separated from
its complement in a topologically consistent way.
Thus, (U, τR) models which location events are operationally “open/observable” under privacy-driven coarse sensing: exactly the unions of indiscernibility classes.
Proposition 4.2.3 (Interior and closure coincide with rough approximations). In the rough
topological space (U, τR), the topological closure and interior satisfy
clτR(A) = R̄(A) and intτR(A) = R̲(A) for all A ⊆ U.
Proof. Since cl_R := R̄ is a Kuratowski closure operator, the induced topology has clτR = cl_R = R̄
by construction. For the interior, using int(A) = U \ cl(U \ A),
intτR(A) = U \ R̄(U \ A) = { x ∈ U | [x]R ∩ (U \ A) = ∅ } = { x ∈ U | [x]R ⊆ A } = R̲(A).
Remark 4.2.4 (Concrete description of τR). A set O ⊆ U is in τR if and only if it is a union of
R-equivalence classes:
O ∈ τR ⇐⇒ (∀x ∈ O) [x]R ⊆ O ⇐⇒ O = ⋃x∈O [x]R.
Thus (U, τR) is precisely the quotient (saturation) topology determined by the partition U/R.


# Page. 133

![Page Image](https://bcdn.docswell.com/page/8EDK3KDY7G.jpg)

4.3 Rough Group
A rough group approximates a subgroup by lower and upper sets under an equivalence relation,
with operations consistent modulo boundaries [323–326].
Definition 4.3.1 (Rough group). Let K = (U, R) be an approximation space, let ∗ :
U × U → U be a binary operation, and write Ḡ for the upper approximation of a set G ⊆ U.
A nonempty subset G ⊆ U is called a rough group (in K) if the following hold:
(1) (Closure up to upper approximation) For all x, y ∈ G, we have x ∗ y ∈ Ḡ.
(2) (Associativity on the upper approximation) For all a, b, c ∈ Ḡ,
(a ∗ b) ∗ c = a ∗ (b ∗ c).
(3) (Rough identity) There exists an element e ∈ Ḡ such that for all x ∈ G,
x ∗ e = e ∗ x = x.
The element e is called a rough identity element of G.
(4) (Rough inverse) For every x ∈ G, there exists an element y ∈ Ḡ such that
x ∗ y = y ∗ x = e.
Such a y is called a rough inverse element of x (in G).
Example 4.3.2 (A rough group from coarse phase sensing). Consider a rotating machine whose
controller tracks a discrete phase
U = Z8 = {0, 1, 2, 3, 4, 5, 6, 7},
with the group operation given by addition modulo 8: x ∗ y := x + y (mod 8).
A low-cost sensor may only report whether the phase is even or odd. This induces the indiscernibility relation R on U defined by parity: x R y ⇐⇒ x ≡ y (mod 2), so the equivalence classes are
[0]R = {0, 2, 4, 6}, [1]R = {1, 3, 5, 7}.
Let the set of calibrated acceptable phases be
G := {0, 2, 6} ⊆ U.
In the approximation space K = (U, R), the upper approximation of a set A ⊆ U is
Ā := ⋃ {[x]R | [x]R ∩ A ≠ ∅}.


# Page. 134

![Page Image](https://bcdn.docswell.com/page/V7PK4KX2J8.jpg)

Hence
Ḡ = ⋃ {[x]R | [x]R ∩ G ≠ ∅} = [0]R = {0, 2, 4, 6}.
Claim. Ḡ is a subgroup of (U, ∗) = (Z8, + mod 8). In particular, G is a rough group (its group
laws hold up to the upper approximation Ḡ).
Verification.
(1) Closure. If a, b ∈ G, then a and b are even; hence a + b is even, so
a ∗ b ∈ {0, 2, 4, 6} = Ḡ.
(2) Associativity. For all a, b, c ∈ U (hence for all a, b, c ∈ Ḡ),
(a ∗ b) ∗ c = (a + b) + c ≡ a + (b + c) = a ∗ (b ∗ c) (mod 8).
(3) Identity. The element e := 0 belongs to G, and for every a ∈ G,
a ∗ e = a, e ∗ a = a.
(4) Inverses. For a ∈ G, its inverse in Z8 is −a (mod 8), which is again even and thus lies in
Ḡ. Concretely, within Ḡ = {0, 2, 4, 6},
0⁻¹ = 0, 2⁻¹ = 6, 4⁻¹ = 4, 6⁻¹ = 2,
and in each case a ∗ a⁻¹ = a⁻¹ ∗ a = 0 = e.
Therefore (Ḡ, ∗) is a subgroup of (U, ∗). The set G itself need not be a subgroup (indeed
2 ∗ 2 = 4 ∉ G), but it is group-like up to sensing resolution: products and inverses of elements
of G are guaranteed to lie in the sensor-induced upper approximation Ḡ.
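The verification above can also be scripted. In the sketch below (names such as `G_up` are ours), `G_up` is the parity-induced upper approximation of G, and the flags mirror the closure, identity, and inverse checks of the example:

```python
U = set(range(8))

def op(x, y):
    """The group operation: addition modulo 8."""
    return (x + y) % 8

def eq_class(x):
    """Parity block [x]_R."""
    return {u for u in U if u % 2 == x % 2}

G = {0, 2, 6}
G_up = {u for x in G for u in eq_class(x)}   # upper approximation of G

# Rough-group checks: everything is required to land in G_up, not in G.
closure_ok = all(op(x, y) in G_up for x in G for y in G)
e = 0
identity_ok = e in G_up and all(op(x, e) == op(e, x) == x for x in G)
inverses_ok = all(any(op(x, y) == op(y, x) == e for y in G_up) for x in G)
not_classical = op(2, 2) not in G            # 2 * 2 = 4 escapes G itself
```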
Definition 4.3.3 (Rough subgroup). Let G ⊆ U be a rough group in an approximation space
K = (U, R). A nonempty subset H ⊆ G is called a rough subgroup of G if H is a rough group
with respect to the same operation ∗ (and with approximations taken in the same space K ).
Remark 4.3.4. If Ḡ = G (i.e., G is R-definable), then the above axioms reduce to the usual
group axioms on G, so every ordinary group is a special case of a rough group.


# Page. 135

![Page Image](https://bcdn.docswell.com/page/2JVVXVLXJQ.jpg)

Example 4.3.5 (A concrete rough subgroup in a cyclic-time system). Consider a system whose
internal state is a cyclic time stamp modulo 12 (e.g., a 12-hour clock). Let
U := Z12 = {0, 1, . . . , 11}, x ∗ y := x + y (mod 12).
Thus (U, ∗) is the usual cyclic group.
Coarsened observation (indiscernibility). Assume the logging mechanism only records time
in 4-hour buckets, so times congruent modulo 4 are indistinguishable. Define an equivalence
relation R on U by
(x, y) ∈ R ⇐⇒ x ≡ y (mod 4).
Then the R-classes are
[0]R = {0, 4, 8}, [1]R = {1, 5, 9}, [2]R = {2, 6, 10}, [3]R = {3, 7, 11}.
A rough group of “allowed start times”. Suppose operational policy declares that a certain
action is scheduled at exactly 0 or 4 (mod 12), so we set
G := {0, 4} ⊆ U.
Because of the 4-hour coarsening, the upper approximation of G is
Ḡ = {x ∈ U | [x]R ∩ G ≠ ∅} = [0]R = {0, 4, 8}.
Then G is a rough group in K = (U, R):
• (Closure up to upper approximation) For x, y ∈ G, one has 0 ∗ 0 = 0, 0 ∗ 4 = 4, 4 ∗ 0 = 4,
and 4 ∗ 4 = 8, hence x ∗ y ∈ Ḡ = {0, 4, 8}.
• (Associativity on Ḡ) This holds since ∗ is associative on all of U.
• (Rough identity) e := 0 ∈ Ḡ satisfies x ∗ e = e ∗ x = x for all x ∈ G.
• (Rough inverses) 0 has rough inverse 0, and 4 has rough inverse 8 ∈ Ḡ because 4 ∗ 8 =
8 ∗ 4 = 0 in Z12.
A rough subgroup. Let
H := {4} ⊆ G.
Its upper approximation is the same bucket:
H̄ = {x ∈ U | [x]R ∩ H ≠ ∅} = [4]R = {0, 4, 8}.
Then H is a rough subgroup of G, because H is itself a rough group in the same approximation
space (U, R):
• 4 ∗ 4 = 8 ∈ H̄ (closure up to H̄),
• associativity holds on H̄,
• e = 0 ∈ H̄ is a rough identity for H (since 4 ∗ 0 = 0 ∗ 4 = 4),
• 8 ∈ H̄ is a rough inverse of 4 (since 4 ∗ 8 = 8 ∗ 4 = 0).
Note that H is not a subgroup in the classical sense (it does not contain 0), but it is a rough
subgroup because the identity and inverses are permitted to lie in the upper approximation.
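A short script (our own naming) confirms that H = {4} passes the rough-group checks inside the shared 4-hour bucket {0, 4, 8} while failing the classical subgroup test:

```python
U = set(range(12))

def op(x, y):
    """Addition modulo 12."""
    return (x + y) % 12

def bucket(x):
    """4-hour bucket [x]_R (times congruent modulo 4)."""
    return {u for u in U if u % 4 == x % 4}

G = {0, 4}
H = {4}
G_up = {u for x in G for u in bucket(x)}
H_up = {u for x in H for u in bucket(x)}

closure_ok = all(op(x, y) in H_up for x in H for y in H)   # 4 * 4 = 8
identity_ok = 0 in H_up and all(op(x, 0) == x for x in H)  # rough identity
inverse_ok = 8 in H_up and op(4, 8) == 0                   # rough inverse of 4
classical_subgroup = 0 in H                                # False: no identity in H
```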


# Page. 136

![Page Image](https://bcdn.docswell.com/page/5EGLVLXRJL.jpg)

4.4 Rough Matroids
A rough matroid links rough-set approximations with matroid independence, defining independent sets through definable subsets induced by relations or coverings [327–329].
Definition 4.4.1 (Parametric rough matroid (rough matroid induced by (U, R) w.r.t. X)). Let
(U, R) be a Pawlak approximation space and fix a parameter subset X ⊆ U. Define the set family
IX := { I ⊆ U | R̲(I) ⊆ X }.
The matroid MX := (U, IX) is called the parametric matroid of the rough set (or a parametric
rough matroid) with respect to X.
Example 4.4.2 (Parametric rough matroid: audit sampling under supplier-indiscernibility).
Let U be a finite set of invoice records:
U = {a1 , a2 , b1 , b2 , c1 },
where a• are invoices from supplier A, b• from supplier B , and c1 from supplier C . Assume our
information system only distinguishes the supplier, so we use the equivalence relation R on U
given by
x R y ⇐⇒ x and y are issued by the same supplier.
Hence
U/R = { {a1, a2}, {b1, b2}, {c1} }.
Suppose the compliance team has already cleared suppliers A and C , so the parameter set is
X := {a1 , a2 , c1 } ⊆ U.
Define the independent family
IX = { I ⊆ U | R̲(I) ⊆ X } and the matroid MX = (U, IX).
Interpretation. The lower approximation R̲(I) is the union of those supplier blocks fully
contained in I. Thus R̲(I) ⊆ X means: whenever the audit sample I contains all invoices of
a supplier, that supplier must be cleared (i.e., belong to X). So we may fully audit suppliers A
and C, but we must not select all invoices of the uncleared supplier B.
Concrete sets.
• I1 = {a1, a2, c1, b1} is independent: R̲(I1) = {a1, a2, c1} ⊆ X (the B-block is not fully
included).
• I2 = {b1, b2} is dependent: R̲(I2) = {b1, b2} ⊈ X (the uncleared supplier block is fully
included).
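Testing independence in MX amounts to one set comparison per candidate sample. A minimal sketch, with our own variable names:

```python
U = {"a1", "a2", "b1", "b2", "c1"}
blocks = [{"a1", "a2"}, {"b1", "b2"}, {"c1"}]  # supplier partition U/R
X = {"a1", "a2", "c1"}                          # cleared suppliers A and C

def lower(I):
    """Lower approximation: union of the blocks fully contained in I."""
    return {x for B in blocks if B <= I for x in B}

def independent(I):
    """I is independent in M_X iff its lower approximation stays inside X."""
    return lower(I) <= X

I1 = {"a1", "a2", "c1", "b1"}   # B-block only partially included
I2 = {"b1", "b2"}               # uncleared B-block fully included
```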


# Page. 137

![Page Image](https://bcdn.docswell.com/page/4JQY6Y8Y7P.jpg)

Definition 4.4.3 (Partition-circuit rough matroid (the case X = ∅)). Let (U, R) be a Pawlak
approximation space. The partition-circuit rough matroid is the matroid
MR := (U, IR), where IR := { I ⊆ U | R̲(I) = ∅ }.
Equivalently, its circuit family is exactly the partition into equivalence classes:
C(MR) = U/R.
Example 4.4.4 (Partition-circuit rough matroid: “never take an entire department”). Let U
be a set of employees assigned to departments:
U = {a1 , a2 , b1 , b2 , c1 },
where a• belong to department A, b• to department B , and c1 to department C . Let R be the
equivalence relation
x R y ⇐⇒ x and y are in the same department,
so that
U/R = { {a1, a2}, {b1, b2}, {c1} }.
Consider the partition-circuit rough matroid
MR = (U, IR), where IR = { I ⊆ U | R̲(I) = ∅ }.
Interpretation. Since R̲(I) is the union of those department blocks fully contained in I, the
condition R̲(I) = ∅ means that the selected team I does not completely contain any department.
Equivalently, I contains at most |B| − 1 members from each block B ∈ U/R.
Concrete sets and circuits.
• I1 = {a1, b1, c1} is independent, because it contains no full block.
• I2 = {a1, a2} is dependent, and it is a circuit (a minimal dependent set), because it is
exactly one equivalence class.
Indeed, the circuit family is precisely the partition:
C(MR) = U/R = { {a1, a2}, {b1, b2}, {c1} }.


# Page. 138

![Page Image](https://bcdn.docswell.com/page/K74W4W2ZE1.jpg)

4.5 Soft Rough Graph
A soft rough graph represents a graph via parameterized soft-set approximations of its vertex
and edge sets, yielding lower and upper approximation subgraphs [312].
Definition 4.5.1 (Soft rough graph). [312] Let G = (V, E) be a simple (undirected) graph and
let A be a nonempty set of parameters. A soft set over V is a mapping F : A → P(V), and a
soft set over E is a mapping K : A → P(E).
For a target vertex set X ⊆ V, define the soft rough lower and upper vertex approximations
F_*(X) := { v ∈ V | ∃a ∈ A : v ∈ F(a) ⊆ X },
F^*(X) := { v ∈ V | ∃a ∈ A : v ∈ F(a) and F(a) ∩ X ≠ ∅ }.
Similarly, for a target edge set Y ⊆ E, define the soft rough lower and upper edge approximations
K_*(Y) := { e ∈ E | ∃a ∈ A : e ∈ K(a) ⊆ Y },
K^*(Y) := { e ∈ E | ∃a ∈ A : e ∈ K(a) and K(a) ∩ Y ≠ ∅ }.
For any S ⊆ V, write
E[S] := { uv ∈ E | u ∈ S, v ∈ S }
for the set of edges induced by S. The lower and upper soft rough subgraphs associated with
(X, Y) are defined by
H_*(X, Y) := ( F_*(X), K_*(Y) ∩ E[F_*(X)] ),
H^*(X, Y) := ( F^*(X), K^*(Y) ∩ E[F^*(X)] ),
so that H_*(X, Y) and H^*(X, Y) are subgraphs of G.
A soft rough graph (induced by F, K, A and the targets X ⊆ V, Y ⊆ E) is the pair
G̃(X, Y) := ( H_*(X, Y), H^*(X, Y) ).
A common choice is Y := E[X] (the edges induced by X), in which case we write G̃(X) :=
G̃(X, E[X]). The family of all soft rough graphs of G (over all admissible data) is denoted
by SRG(G).
Example 4.5.2 (Soft rough graph for suspicious users in a small social network). Consider a
small friendship network G = (V, E), where
V = {1, 2, 3, 4, 5}
represents user accounts and
E = {12, 13, 23, 24, 34, 45}
represents undirected friendship links (we write ij for the edge {i, j}).
Let the parameter set be
A = {a1 , a2 , a3 },


# Page. 139

![Page Image](https://bcdn.docswell.com/page/LJ1Y4YMDEG.jpg)

where a1 = “shares the same device fingerprint as a flagged account”, a2 = “IP-subnet overlap
with flagged accounts”, a3 = “similar posting pattern (burstiness)”.
Define a soft set over vertices F : A → P(V ) by
F (a1 ) = {2, 3},
F (a2 ) = {3, 4},
F (a3 ) = {4, 5}.
Define a soft set over edges K : A → P(E) by selecting the edges inside each F (a):
K(a1 ) = E[F (a1 )] = {23},
K(a2 ) = E[F (a2 )] = {34},
K(a3 ) = E[F (a3 )] = {45}.
Suppose the analysts’ target suspicious vertex set is
X = {3, 4} ⊆ V,
and choose the common target edge set Y := E[X] = {34} ⊆ E .
Soft rough vertex approximations. By Definition 4.5.1,
F_*(X) = { v ∈ V | ∃a ∈ A : v ∈ F(a) ⊆ X }.
Since F(a2) = {3, 4} ⊆ X (while F(a1) = {2, 3} ⊈ X and F(a3) = {4, 5} ⊈ X), we get
F_*(X) = {3, 4}.
Also,
F^*(X) = { v ∈ V | ∃a ∈ A : v ∈ F(a), F(a) ∩ X ≠ ∅ }.
Here every F(ai) intersects X: F(a1) ∩ X = {3}, F(a2) ∩ X = {3, 4}, F(a3) ∩ X = {4}, hence
F^*(X) = {2, 3, 4, 5}.
Soft rough edge approximations. For Y = {34},
K_*(Y) = { e ∈ E | ∃a ∈ A : e ∈ K(a) ⊆ Y }.
Only K(a2) = {34} ⊆ Y, so
K_*(Y) = {34}.
Moreover,
K^*(Y) = { e ∈ E | ∃a ∈ A : e ∈ K(a), K(a) ∩ Y ≠ ∅ },
and again only K(a2) intersects Y, so
K^*(Y) = {34}.
Lower and upper soft rough subgraphs. Compute the induced edge sets:
E[F_*(X)] = E[{3, 4}] = {34},
E[F^*(X)] = E[{2, 3, 4, 5}] = {23, 24, 34, 45}.
Therefore,
H_*(X, Y) = ( F_*(X), K_*(Y) ∩ E[F_*(X)] ) = ( {3, 4}, {34} ),
H^*(X, Y) = ( F^*(X), K^*(Y) ∩ E[F^*(X)] ) = ( {2, 3, 4, 5}, {34} ).
Hence the soft rough graph is
G̃(X, Y) = ( H_*(X, Y), H^*(X, Y) ).
Vertices 3 and 4 are definitely suspicious because one parameter (a2) isolates exactly {3, 4}.
Vertices 2 and 5 are possibly suspicious because they appear in parameter blocks that overlap
the suspicious set.
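The entire computation of this example fits in a few lines. In the sketch below (our own naming; edges are modeled as frozensets, so the edge 34 becomes frozenset({3, 4})), `F_low`/`F_up` and `K_low`/`K_up` are the four soft rough approximations:

```python
V = {1, 2, 3, 4, 5}
E = {frozenset(p) for p in [(1, 2), (1, 3), (2, 3), (2, 4), (3, 4), (4, 5)]}
F = {"a1": {2, 3}, "a2": {3, 4}, "a3": {4, 5}}    # soft set over V
K = {a: {e for e in E if e <= F[a]} for a in F}   # K(a) = E[F(a)]

X = {3, 4}
Y = {e for e in E if e <= X}                      # E[X]: only the edge 34

F_low = {v for a in F if F[a] <= X for v in F[a]}  # lower vertex approximation
F_up = {v for a in F if F[a] & X for v in F[a]}    # upper vertex approximation
K_low = {e for a in K if K[a] <= Y for e in K[a]}  # lower edge approximation
K_up = {e for a in K if K[a] & Y for e in K[a]}    # upper edge approximation

def induced(S):
    """E[S]: edges with both endpoints in S."""
    return {e for e in E if e <= S}

H_low = (F_low, K_low & induced(F_low))   # lower soft rough subgraph
H_up = (F_up, K_up & induced(F_up))       # upper soft rough subgraph
```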


# Page. 140

![Page Image](https://bcdn.docswell.com/page/GJWGXG3872.jpg)

Chapter 5
Conclusion
In this book, we surveyed rough set theory and a broad range of its major extensions, aiming to provide a compact map of the field from classical Pawlak approximations to relation/granulation-based generalizations, multi-granulation and weighted models, neighborhood and
probabilistic variants, and uncertainty-aware frameworks (e.g., fuzzy, intuitionistic fuzzy, neutrosophic, and plithogenic rough sets). We also collected several closely related viewpoints—
including rough graphs, rough topological spaces, rough groups, rough matroids, and soft rough
graphs—to help connect rough approximations with adjacent mathematical structures.
Looking ahead, we hope that future work will advance both (i) application-oriented case studies
that clarify which rough models are most effective for particular data characteristics (e.g., imbalance, noise, heterogeneous feature importance, or evolving/streaming observations), and (ii)
algorithmic development, including scalable attribute reduction, efficient computation of (possibly multi-stage) approximations, and reproducible benchmarking and software implementations
for modern decision support and explainable learning.


# Page. 141

![Page Image](https://bcdn.docswell.com/page/4EZL6LV973.jpg)



# Page. 142

![Page Image](https://bcdn.docswell.com/page/Y76W2W5D7V.jpg)

Disclaimer
Funding
This study did not receive any financial or external support from organizations or individuals.
Acknowledgments
We extend our sincere gratitude to everyone who provided insights, inspiration, and assistance
throughout this research. We particularly thank our readers for their interest and acknowledge
the authors of the cited works for laying the foundation that made our study possible. We
also appreciate the support from individuals and institutions that provided the resources and
infrastructure needed to produce and share this book. Finally, we are grateful to all those who
supported us in various ways during this project.
Data Availability
This research is purely theoretical, involving no data collection or analysis. We encourage future researchers to pursue empirical investigations to further develop and validate the concepts
introduced here.
Ethical Approval
As this research is entirely theoretical in nature and does not involve human participants or
animal subjects, no ethical approval is required.
Use of Generative AI and AI-Assisted Tools
We use generative AI and AI-assisted tools for tasks such as English grammar checking, and we
do not employ them in any way that violates ethical standards.


# Page. 143

![Page Image](https://bcdn.docswell.com/page/G75M2MY874.jpg)

Conflicts of Interest
The authors confirm that there are no conflicts of interest related to the research or its publication.
Disclaimer
This work presents theoretical concepts that have not yet undergone practical testing or validation. Future researchers are encouraged to apply and assess these ideas in empirical contexts.
While every effort has been made to ensure accuracy and appropriate referencing, unintentional
errors or omissions may still exist. Readers are advised to verify referenced materials on their
own. The views and conclusions expressed here are the authors’ own and do not necessarily
reflect those of their affiliated organizations.


# Page. 144

![Page Image](https://bcdn.docswell.com/page/9J2949GVER.jpg)

Appendix (List of Tables)
1.1 Concise comparison of classical (crisp) sets and Pawlak rough sets (p. 6)
1.2 At-a-glance taxonomy for rough-set–based models (granulation, value semantics, outputs, and typical uses) (p. 7)
1.3 Practical selection guide: which rough-set family to use under typical data conditions (p. 8)
2.1 Concise comparison of Pawlak rough sets and relation-based generalized rough sets (p. 11)
2.2 Concise comparison of Pawlak rough sets and (m, n)-SuperHyperRough sets (p. 15)
2.3 Concise comparison between Pawlak rough sets and weak rough sets (p. 50)


# Page. 145

![Page Image](https://bcdn.docswell.com/page/DEY4M4VQJM.jpg)



# Page. 146

![Page Image](https://bcdn.docswell.com/page/VJNYWYM278.jpg)

Bibliography
[1] Thomas Jech. Set theory: The third millennium edition, revised and expanded. Springer, 2003.
[2] Lotfi A Zadeh. Fuzzy sets. Information and control, 8(3):338–353, 1965.
[3] Krassimir T Atanassov. Circular intuitionistic fuzzy sets. Journal of Intelligent & Fuzzy Systems, 39(5):5981–5986, 2020.
[4] Vicenç Torra. Hesitant fuzzy sets. International journal of intelligent systems, 25(6):529–539, 2010.
[5] Bui Cong Cuong. Picture fuzzy sets. Journal of Computer Science and Cybernetics, 30:409, 2015.
[6] Said Broumi, Mohamed Talea, Assia Bakali, and Florentin Smarandache. Single valued neutrosophic
graphs. Journal of New theory, 10:86–101, 2016.
[7] Haibin Wang, Florentin Smarandache, Yanqing Zhang, and Rajshekhar Sunderraman. Single valued neutrosophic sets. Infinite study, 2010.
[8] R Radha, A Stanis Arul Mary, and Florentin Smarandache. Quadripartitioned neutrosophic pythagorean
soft set. International Journal of Neutrosophic Science (IJNS) Volume 14, 2021, page 11, 2021.
[9] Rama Mallick and Surapati Pramanik. Pentapartitioned neutrosophic set and its properties, volume 36.
Infinite Study, 2020.
[10] Lin Wei. An integrated decision-making framework for blended teaching quality evaluation in college
english courses based on the double-valued neutrosophic sets. J. Intell. Fuzzy Syst., 45:3259–3266, 2023.
[11] Hu Zhao and Hong-Ying Zhang. On hesitant neutrosophic rough set over two universes and its application.
Artificial Intelligence Review, 53:4387–4406, 2020.
[12] Zdzisław Pawlak. Rough sets. International journal of computer & information sciences, 11:341–356, 1982.
[13] Said Broumi, Florentin Smarandache, and Mamoni Dhar. Rough neutrosophic sets. Infinite Study, 32:493–502, 2014.
[14] Florentin Smarandache. Plithogenic set, an extension of crisp, fuzzy, intuitionistic fuzzy, and neutrosophic
sets-revisited. Infinite study, 2018.
[15] Florentin Smarandache. Extension of HyperGraph to n-SuperHyperGraph and to Plithogenic n-SuperHyperGraph, and Extension of HyperAlgebra to n-ary (Classical-/Neutro-/Anti-) HyperAlgebra. Infinite Study, 2020.
[16] Feng Feng, Xiaoyan Liu, Violeta Leoreanu-Fotea, and Young Bae Jun. Soft sets and soft rough sets.
Information Sciences, 181(6):1125–1137, 2011.
[17] Dmitriy Molodtsov. Soft set theory-first results. Computers & mathematics with applications, 37(4-5):19–31, 1999.
[18] Lotfi A Zadeh. A note on z-numbers. Information sciences, 181(14):2923–2932, 2011.
[19] Florentin Smarandache. A unifying field in logics: Neutrosophic logic. In Philosophy, pages 1–141. American Research Press, 1999.
[20] Naeem Jan, Tahir Mahmood, Lemnaouar Zedam, and Zeeshan Ali. Multi-valued picture fuzzy soft sets
and their applications in group decision-making problems. Soft Computing, 24:18857 – 18879, 2020.
[21] Yasmine M Ibrahim, Reem Essameldin, and Saad M Darwish. An adaptive hate speech detection approach using neutrosophic neural networks for social media forensics. Computers, Materials & Continua, 79(1), 2024.
[22] OM Khaled, AA Salama, Mostafa Herajy, MM El-Kirany, Huda E Khalid, Ahmed K Essa, and Ramiz
Sabbagh. A novel approach for cyber-attack detection in iot networks with neutrosophic neural networks.
Neutrosophic Sets and Systems, 86(1):48, 2025.
[23] Zdzislaw Pawlak, S. K. Michael Wong, Wojciech Ziarko, et al. Rough sets: probabilistic versus deterministic
approach. International Journal of Man-Machine Studies, 29(1):81–95, 1988.
[24] Rohit Kannan, Güzin Bayraksan, and James R Luedtke. Data-driven sample average approximation with
covariate information. Operations Research, 2025.
[25] Neil Mac Parthalain and Qiang Shen. Exploring the boundary region of tolerance rough sets for feature
selection. Pattern recognition, 42(5):655–667, 2009.


# Page. 147

![Page Image](https://bcdn.docswell.com/page/YE9PXPNDJ3.jpg)

[26] Roman W Świniarski. Rough sets methods in feature reduction and classification. International Journal
of Applied Mathematics and Computer Science, 2001.
[27] Salvatore Greco, Benedetto Matarazzo, and Roman Slowinski. Rough sets theory for multicriteria decision
analysis. European journal of operational research, 129(1):1–47, 2001.
[28] Shi-Yu Huang. Intelligent decision support: handbook of applications and advances of the rough sets theory. Springer Science & Business Media, 2013.
[29] Rafael Bello and Rafael Falcon. Rough sets in machine learning: a review. Thriving Rough Sets: 10th Anniversary-Honoring Professor Zdzisław Pawlak’s Life and Legacy & 35 Years of Rough Sets, pages 87–118, 2017.
[30] Wanting Ji, Yan Pang, Xiaoyun Jia, Zhongwei Wang, Feng Hou, Baoyan Song, Mingzhe Liu, and Ruili
Wang. Fuzzy rough sets and fuzzy rough neural networks for feature selection: A review. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 11(3):e1402, 2021.
[31] Jacek Jelonek, Krzysztof Krawiec, and Roman Slowiński. Rough set reduction of attributes and their
domains for neural networks. Computational intelligence, 11(2):339–347, 1995.
[32] Georg Peters, Pawan Lingras, Dominik Lzak, and Yiyu Yao. Rough sets: Selected methods and applications
in management and engineering. In Advanced Information and Knowledge Processing, 2012.
[33] Mitsuo Nagamachi, Y Okazaki, and Mizuho Ishikawa. Kansei engineering and application of the rough
sets model. Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control
Engineering, 220:763 – 768, 2006.
[34] M Hosny and Tareq Al-shami. Employing a generalization of open sets defined by ideals to initiate
novel rough approximation spaces with a chemical application. European Journal of Pure and Applied
Mathematics, 17(4):3436–3463, 2024.
[35] Youfang Cao, Shi Liu, Lida Zhang, Jie Qin, Jiang Wang, and Kexuan Tang. Prediction of protein structural
class with rough sets. BMC bioinformatics, 7(1):20, 2006.
[36] Lixiang Shen, Francis EH Tay, Liangsheng Qu, and Yudi Shen. Fault diagnosis using rough sets theory.
Computers in industry, 43(1):61–72, 2000.
[37] Zdzisław Pawlak. Rough sets: Theoretical aspects of reasoning about data, volume 9. Springer Science & Business Media, 2012.
[38] Zdzislaw Pawlak. Rough set theory and its applications to data analysis. Cybernetics & Systems, 29(7):661–688, 1998.
[39] William Zhu. Generalized rough sets based on relations. Information Sciences, 177(22):4997–5011, 2007.
[40] Zhi Pei, Daowu Pei, and Li Zheng. Topology vs generalized rough sets. International Journal of Approximate Reasoning, 52(2):231–239, 2011.
[41] William Zhu and Fei-Yue Wang. Reduction and axiomization of covering generalized rough sets. Information sciences, 152:217–230, 2003.
[42] JingTao Yao, Davide Ciucci, and Yan Zhang. Generalized rough sets. Springer Handbook of Computational
Intelligence, pages 413–424, 2015.
[43] Hai Yu and Wan-rong Zhan. On the topological properties of generalized rough sets. Information Sciences,
263:141–152, 2014.
[44] Takaaki Fujita. Short introduction to rough, hyperrough, superhyperrough, treerough, and multirough set.
Advancing Uncertain Combinatorics through Graphization, Hyperization, and Uncertainization: Fuzzy,
Neutrosophic, Soft, Rough, and Beyond, page 394, 2025.
[45] Takaaki Fujita. Hierarchical (m, n)-superhypersoft and (m, n)-superhyperrough sets: More unified multilayer framework for advanced uncertainty modeling. Authorea Preprints, 2025.
[46] Tinghuai Ma and Meili Tang. Weighted rough set model. In Sixth International Conference on Intelligent
Systems Design and Applications, volume 1, pages 481–485. IEEE, 2006.
[47] Nayani Sateesh, Pasupureddy Srinivasa Rao, and Davuluri Rajya Lakshmi. Optimized ensemble learning-based student’s performance prediction with weighted rough set theory enabled feature mining. Concurrency and Computation: Practice and Experience, 35(7):e7601, 2023.
[48] Jinfu Liu, Qinghua Hu, and Daren Yu. Weighted rough set learning: towards a subjective approach. In
Advances in Knowledge Discovery and Data Mining: 11th Pacific-Asia Conference, PAKDD 2007, Nanjing,
China, May 22-25, 2007. Proceedings 11, pages 696–703. Springer, 2007.
[49] Nagaraju Aitha and Ramachandram Srinadas. A strategy to reduce the control packet load of aodv
using weighted rough set model for manet. The International Arab Journal of Information Technology,
8(1):108–117, 2009.
[50] Changzhong Wang, Changyue Wang, Yuhua Qian, and Qiangkui Leng. Feature selection based on weighted
fuzzy rough sets. IEEE Transactions on Fuzzy Systems, 32(7):4027–4037, 2024.
[51] Zhong Yuan, Baiyang Chen, Jia Liu, Hongmei Chen, Dezhong Peng, and Peilin Li. Anomaly detection
based on weighted fuzzy-rough density. Applied Soft Computing, 134:109995, 2023.


# Page. 148

![Page Image](https://bcdn.docswell.com/page/GE8D2DK5ED.jpg)

[52] Jinfu Liu, Qinghua Hu, and Daren Yu. A weighted rough set based method developed for class imbalance
learning. Information Sciences, 178(4):1235–1256, 2008.
[53] Qinghua Hu, Daren Yu, Jinfu Liu, and Congxin Wu. Neighborhood rough set based heterogeneous feature
subset selection. Information sciences, 178(18):3577–3594, 2008.
[54] Tareq M Al-shami and Davide Ciucci. Subset neighborhood rough sets. Knowledge-Based Systems, 237:107868, 2022.
[55] Changzhong Wang, Mingwen Shao, Qiang He, Yuhua Qian, and Yali Qi. Feature subset selection based
on fuzzy neighborhood rough sets. Knowledge-Based Systems, 111:173–179, 2016.
[56] Yumin Chen, Yu Xue, Ying Ma, and Feifei Xu. Measures of uncertainty for neighborhood rough sets.
Knowledge-Based Systems, 120:226–235, 2017.
[57] TV Soumya and MK Sabu. A game-theoretic sequential three-way decision using probabilistic rough sets
and multiple levels of granularity. Discover Computing, 28(1):211, 2025.
[58] Wenyan Xu, Yucong Yan, and Xiaonan Li. Sequential rough set: a conservative extension of pawlak’s
classical rough set. Artificial Intelligence Review, 58(1):1–33, 2025.
[59] Takaaki Fujita, Raed Hatamleh, and Ahmed Salem Heilat. Contrasoft set and contrarough set with using upside-down logic. Statistics, Optimization & Information Computing, 2025.
[60] Yiyu Yao. Probabilistic rough set approximations. International Journal of Approximate Reasoning, 49(2):255–271, 2008.
[61] Yiyu Yao, Salvatore Greco, and Roman Słowiński. Probabilistic rough sets. Springer handbook of computational intelligence, pages 387–411, 2015.
[62] Nouman Azam and JingTao Yao. Analyzing uncertainties of probabilistic rough set regions with gametheoretic rough sets. International journal of approximate reasoning, 55(1):142–155, 2014.
[63] Xinru Li, Lingqiang Li, and Chengzhao Jia. Weighted multi-granularity fuzzy probabilistic rough set
based on semi-overlapping function and its application in three-way decision. Eng. Appl. Artif. Intell.,
161:112023, 2025.
[64] T. V. Soumya and M. K. Sabu. A game-theoretic sequential three-way decision using probabilistic rough
sets and multiple levels of granularity. Discover Computing, 28, 2025.
[65] Yan Lindsay Sun, Bin Pang, Ju-Sheng Mi, and Wei-Zhi Wu. Maximal consistent blocks-based optimistic and pessimistic probabilistic rough fuzzy sets and their applications in three-way multiple attribute
decision-making. Int. J. Approx. Reason., 187:109529, 2025.
[66] Jie Hu, Tianrui Li, Chuan Luo, Hamido Fujita, and Shaoyong Li. Incremental fuzzy probabilistic rough
sets over two universes. International Journal of Approximate Reasoning, 81:28–48, 2017.
[67] Zhan’ao Xue, Li-Ping Zhao, Min Zhang, and Bing-Xin Sun. Three-way decisions based on multigranulation support intuitionistic fuzzy probabilistic rough sets. Journal of Intelligent & Fuzzy Systems,
38(4):5013–5031, 2020.
[68] Prasenjit Mandal and AS Ranadive. Multi-granulation interval-valued fuzzy probabilistic rough sets and
their corresponding three-way decisions based on interval-valued fuzzy preference relations. Granular
Computing, 4(1):89–108, 2019.
[69] Manish Aggarwal. Probabilistic variable precision fuzzy rough sets. IEEE Transactions on Fuzzy Systems,
24(1):29–39, 2015.
[70] Wojciech Ziarko and Ning Shan. Machine learning: rough sets perspective. In Proceedings of International
Conference on Expert Systems for Development, pages 114–118. IEEE, 1994.
[71] Said Broumi, D Nagarajan, Michael Gr Voskoglou, and Seyyed Ahmad Edalatpanah. Data-driven Modelling with Fuzzy Sets: A Neutrosophic Perspective. CRC Press, 2024.
[72] Said Broumi. Handbook of research on the applications of neutrosophic sets theory and their extensions in
education. IGI Global, 2023.
[73] T. Fujita. Note of indetermrough set and indetermhyperrough set. Information Sciences with Applications,
7:1–14, 2025.
[74] Li Li. Indetermsoft set for talent training quality assessment in university engineering management under
the background of “dual carbon”. Neutrosophic Sets and Systems, 83:148–158, 2025.
[75] Tao Shen and Chunmei Mao. Indetermsoft set for digital marketing effectiveness evaluation driven by big
data based on consumer behavior. Neutrosophic Sets and Systems, 82:352–369, 2025.
[76] Ling Chen and Chunpeng Liu. Risk management of import and export products based on big data analysis:
Assessment model with indetermsoft set. Neutrosophic Sets and Systems, 82:742–756, 2025.
[77] Vicenç Torra and Yasuo Narukawa. On hesitant fuzzy sets and decision. In 2009 IEEE international
conference on fuzzy systems, pages 1378–1382. IEEE, 2009.
[78] Abhijit Saha, Irfan Deli, and Said Broumi. Hesitant triangular neutrosophic numbers and their applications
to madm. Neutrosophic Sets and Systems, 35:269–298, 2020.


Bibliography
[79] Yiwei Chen, Qiu Xie, Xiaoyu Ma, and Yuwei Li. Optimizing site selection for construction and demolition waste resource treatment plants using a hesitant neutrosophic set: a case study in Xiamen, China.
Engineering Optimization, 57(11):3186–3207, 2025.
[80] Run-yu Zhang, Mingyang Yu, and Yan Gao. A multi-attribute VIKOR decision-making method based on
hesitant neutrosophic sets. Infinite Study, 2017.
[81] Takaaki Fujita. Review note on graphic and cluster extensions of fuzzy, neutrosophic, soft, and rough sets,
2025.
[82] Juanjuan Chen, Shenggang Li, Shengquan Ma, and Xueping Wang. m-polar fuzzy sets: an extension of
bipolar fuzzy sets. The Scientific World Journal, 2014(1):416530, 2014.
[83] V Rajam and N Rajesh. Multipolar neutrosophic subalgebras/ideals of up-algebras. International Journal
of Neutrosophic Science (IJNS), 23(4), 2024.
[84] Limin Wu. Multipolar interval-valued neutrosophic soft set for integrating sustainability into logistics: A
performance-based evaluation of green supply chains. Neutrosophic Sets and Systems, 88:988–998, 2025.
[85] Muhammad Saqlain, Muhammad Riaz, Natasha Kiran, Poom Kumam, and Miin-Shen Yang. Water quality
evaluation using generalized correlation coefficient for m-polar neutrosophic hypersoft sets. Neutrosophic
Sets and Systems, 55:58, 2023.
[86] Takaaki Fujita and Florentin Smarandache. A Dynamic Survey of Fuzzy, Intuitionistic Fuzzy, Neutrosophic,
Plithogenic, and Extensional Sets. Neutrosophic Science International Association (NSIA), 2025.
[87] M Myvizhi, Ahmed A Metwaly, and Ahmed M Ali. Treesoft approach for refining air pollution analysis:
A case study. Neutrosophic Sets and Systems, 68(1):17, 2024.
[88] Florentin Smarandache. Treesoft set vs. hypersoft set and fuzzy-extensions of treesoft sets. HyperSoft Set
Methods in Engineering, 2024.
[89] Takaaki Fujita. Polytree-soft sets and polyforest-soft sets: A directed acyclic framework for soft set
modeling. HyperSoft Set Methods in Engineering, 4:11–23, 2025.
[90] Li Song, Jianyong Liu, Han Ding, and Wenhui Zhang. Forestsoft set for mechanical automation production control systems analysis based on an intelligent manufacturing environment. Neutrosophic Sets and
Systems, 85:229–254, 2025.
[91] Hairong Luo. Forestsoft set approach for estimating innovation and entrepreneurship education in universities through a hierarchical and uncertainty-aware analytical framework. Neutrosophic Sets and Systems,
86(1):21, 2025.
[92] Dong Ya Li and Bao Qing Hu. A kind of dynamic rough sets. In Fourth International Conference on Fuzzy
Systems and Knowledge Discovery (FSKD 2007), volume 3, pages 79–85. IEEE, 2007.
[93] Tom Longshaw and Sue Haines. Dynamic rough sets. In Proceedings of 3rd International Symposium
on Uncertainty Modeling and Analysis and Annual Conference of the North American Fuzzy Information
Processing Society, pages 292–295. IEEE, 1995.
[94] Walid Moudani, Ahmad Shahin, Fadi Chakik, and Félix Mora-Camino. Dynamic rough sets features
reduction. International Journal of Computer Science and Information Security, 9(4), 2011.
[95] Yi Cheng, Duoqian Miao, and Qinrong Feng. A novel approach to generating fuzzy rules based on dynamic
fuzzy rough sets. In 2007 IEEE International Conference on Granular Computing (GRC 2007), pages
133–133. IEEE, 2007.
[96] Xiaowei Wei, Bin Pang, and Ju-Sheng Mi. Axiomatic characterizations of l-valued rough sets using a single
axiom. Information Sciences, 580:283–310, 2021.
[97] Lingqiang Li and Qiu Jin. A novel axiomatic approach to l-valued rough sets within an l-universe via inner
product and outer product of l-subsets. International Journal of Approximate Reasoning, 181:109416,
2025.
[98] Bin Pang and Ju-Sheng Mi. Using single axioms to characterize l-rough approximate operators with respect
to various types of l-relations. International Journal of Machine Learning and Cybernetics, 11(5):1061–1082,
2020.
[99] Chang-jie Zhou and Wei Yao. Lifts of l-valued powerset mapping systems. Hacettepe Journal of Mathematics and Statistics, pages 1–7, 2025.
[100] Nistala VES Murthy and Jami L Prasanna. A theory of lattice-valued fuzzy sets and fuzzy maps between
different lattice-valued fuzzy sets–revisited. International Journal of Advanced Research in Computer
Science, 4(1), 2013.
[101] Aiyared Iampan, Akarachai Satirad, Ronnason Chinram, Rukchart Prasertpong, and Pongpun Julatha.
Lattice valued fuzzy sets in up (bcc)-algebras. International Journal of Analysis and Applications, 21:59–59,
2023.
[102] Yongming Li. Lattice-valued fuzzy Turing machines: Computing power, universality and efficiency. Fuzzy Sets and Systems, 160(23):3453–3474, 2009.


[103] Xianyong Zhang, Zhiwen Mo, Fang Xiong, and Wei Cheng. Comparative study of variable precision rough
set model and graded rough set model. International Journal of Approximate Reasoning, 53(1):104–116,
2012.
[104] Bo Wen Fang and Bao Qing Hu. Probabilistic graded rough set and double relative quantitative decision-theoretic rough set. International Journal of Approximate Reasoning, 74:1–12, 2016.
[105] Weihua Xu, Shihu Liu, Qiaorong Wang, and Wenxiu Zhang. The first type of graded rough set based on
rough membership function. In 2010 Seventh International Conference on Fuzzy Systems and Knowledge
Discovery, volume 4, pages 1922–1926. IEEE, 2010.
[106] Caihui Liu, Duoqian Miao, and Nan Zhang. Graded rough set model based on two universes and its
properties. Knowledge-Based Systems, 33:65–72, 2012.
[107] Manish Agarwal and Themis Palpanas. Linguistic rough sets. International Journal of Machine Learning
and Cybernetics, 7:953–966, 2016.
[108] Tomasz Witczak. On the algebra of possibly paraconsistent sets. Neutrosophic Sets and Systems, 73(1):52,
2024.
[109] L Yong-jin. Weak rough numbers.
[110] Dun Liu, Yiyu Yao, and Tianrui Li. Three-way investment decisions with decision-theoretic rough sets.
Int. J. Comput. Intell. Syst., 4:66–74, 2011.
[111] Decui Liang and Dun Liu. Deriving three-way decisions from intuitionistic fuzzy decision-theoretic rough
sets. Inf. Sci., 300:28–48, 2015.
[112] Decui Liang and Dun Liu. A novel risk decision making based on decision-theoretic rough sets under
hesitant fuzzy information. IEEE Transactions on Fuzzy Systems, 23:237–247, 2015.
[113] Decui Liang and Dun Liu. Systematic studies on three-way decisions with interval-valued decision-theoretic
rough sets. Inf. Sci., 276:186–203, 2014.
[114] Junren Luo, Wanpeng Zhang, Jiongming Su, and Jing Chen. Decision-theoretic rough sets for three-way
decision-making in dilemma reasoning and conflict resolution. Mathematics, 2025.
[115] Junxiao Ren, Xin Chang, Ying Hou, and Boyuan Cao. Probabilistic hesitant fuzzy decision-theoretic rough
set model and its application in supervision of shared parking. Sustainability, 2023.
[116] Luyuan Chen and Yong Deng. Gdtrset: a generalized decision-theoretic rough sets based on evidence
theory. Artificial Intelligence Review, 56:3341–3362, 2023.
[117] Ying Yu, Ming Wan, Jin Qian, Duoqian Miao, Zhiqiang Zhang, and Pengfei Zhao. Feature selection
for multi-label learning based on variable-degree multi-granulation decision-theoretic rough sets. Int. J.
Approx. Reason., 169:109181, 2024.
[118] Hai-Long Yang and Zhi-Lian Guo. Multigranulation decision-theoretic rough sets in incomplete information
systems. International Journal of Machine Learning and Cybernetics, 6(6):1005–1018, 2015.
[119] Yuhua Qian, Xinyan Liang, Guoping Lin, Qian Guo, and Jiye Liang. Local multigranulation decision-theoretic rough sets. International Journal of Approximate Reasoning, 82:119–137, 2017.
[120] Wentao Li and Weihua Xu. Multigranulation decision-theoretic rough set in ordered information system.
Fundamenta Informaticae, 139(1):67–89, 2015.
[121] Dajun Ye, Decui Liang, Tao Li, and Shujing Liang. Multi-class decision-making method for decision-theoretic rough sets based on the constructive covering algorithm. IEEE Access, 8:57833–57848, 2020.
[122] Mohammad Hossein Fazel Zarandi, R. Gamasaee, and Oscar Castillo. Type-1 to type-n fuzzy logic and
systems. In Fuzzy Logic in Its 50th Year, 2016.
[123] Smriti Srivastava and Rajesh Kumar. Design and application of a novel higher-order type-n fuzzy-logic-based system for controlling the steering angle of a vehicle: a soft computing approach. Soft Computing,
28(6):4743–4758, 2024.
[124] Marwan H Hassan, Saad M Darwish, and Saleh M Elkaffas. Type-2 neutrosophic set and their applications
in medical databases deadlock resolution. Computers, Materials & Continua, 74(2), 2023.
[125] Soumen Kumar Das, F Yu Vincent, Sankar Kumar Roy, and Gerhard Wilhelm Weber. Location–allocation
problem for green efficient two-stage vehicle-based logistics system: A type-2 neutrosophic multi-objective
modeling approach. Expert Systems with Applications, 238:122174, 2024.
[126] Muslem Al-Saidi, Áron Ballagi, Oday Ali Hassen, and Saad M Saad. Type-2 neutrosophic markov chain
model for subject-independent sign language recognition: A new uncertainty-aware soft sensor paradigm.
Sensors (Basel, Switzerland), 24(23):7828, 2024.
[127] Khizar Hayat, Muhammad Irfan Ali, Bing-Yuan Cao, and Xiaopeng Yang. A new type-2 soft set: Type-2
soft graphs and their applications. Adv. Fuzzy Syst., 2017:6162753:1–6162753:17, 2017.
[128] Salvatore Greco, Benedetto Matarazzo, Roman Slowinski, and Jerzy Stefanowski. Variable consistency
model of dominance-based rough sets approach. In Rough Sets and Current Trends in Computing: Second
International Conference, RSCTC 2000 Banff, Canada, October 16–19, 2000 Revised Papers 2, pages
170–181. Springer, 2001.


[129] Shaoyong Li, Tianrui Li, and Dun Liu. Incremental updating approximations in dominance-based rough
sets approach under the variation of the attribute set. Knowledge-Based Systems, 40:17–26, 2013.
[130] Bing Huang, Huaxiong Li, Guofu Feng, and Xianzhong Zhou. Dominance-based rough sets in multi-scale
intuitionistic fuzzy decision tables. Applied Mathematics and Computation, 348:487–512, 2019.
[131] Piotr Sawicki and Jacek Żak. The application of dominance-based rough sets theory for the evaluation of
transportation systems. Procedia-Social and Behavioral Sciences, 111:1238–1248, 2014.
[132] Bing Huang, Hua-xiong Li, and Da-kuan Wei. Dominance-based rough set model in intuitionistic fuzzy
information systems. Knowledge-Based Systems, 28:115–123, 2012.
[133] Salvatore Greco, Benedetto Matarazzo, and Roman Słowiński. Granular computing and data mining for
ordered data: The dominance-based rough set approach. In Granular, Fuzzy, and Soft Computing, pages
117–145. Springer, 2023.
[134] Iftikhar Ul Haq, Tanzeela Shaheen, Hamza Ghazanfar Toor, Tapan Senapati, and Sarbast Moslem. A
novel framework of pythagorean fuzzy dominance-based rough sets and analysis of knowledge reductions.
IEEE Access, 11:110656–110669, 2023.
[135] Xia Liu, Xianyong Zhang, and Benwei Chen. Feature selections based on fuzzy probability dominance
rough sets in interval-valued ordered decision systems. International Journal of Machine Learning and
Cybernetics, 16:5365–5395, 2025.
[136] Chaoxiang Yang, Jianxin Cheng, and Xin Wang. Hybrid quality function deployment method for innovative
new product design based on the theory of inventive problem solving and kansei evaluation. Advances in
Mechanical Engineering, 11, 2019.
[137] James F. Baldwin and Sachin Baban Karale. Asymmetric triangular fuzzy sets for classification models.
In International Conference on Knowledge-Based Intelligent Information & Engineering Systems, 2003.
[138] Didier Dubois, Laurent Foulloy, Gilles Mauris, and Henri Prade. Probability-possibility transformations,
triangular fuzzy sets, and probabilistic inequalities. Reliable computing, 10(4):273–297, 2004.
[139] James F Baldwin and Sachin B Karale. Asymmetric triangular fuzzy sets for classification models. In
International Conference on Knowledge-Based and Intelligent Information and Engineering Systems, pages
364–370. Springer, 2003.
[140] Jianping Fan, Xuefei Jia, and Meiqin Wu. A new multi-criteria group decision model based on single-valued
triangular neutrosophic sets and EDAS method. Journal of Intelligent & Fuzzy Systems, 38(2):2089–2102,
2020.
[141] Jianping Fan, Xuefei Jia, and Meiqin Wu. Green supplier selection based on dombi prioritized bonferroni
mean operator with single-valued triangular neutrosophic sets. International Journal of Computational
Intelligence Systems, 12(2):1091–1101, 2019.
[142] Nouman Azam and JingTao Yao. Game-theoretic rough sets for feature selection. In Rough Sets and
Intelligent Systems-Professor Zdzisław Pawlak in Memoriam: Volume 2, pages 61–78. Springer, 2013.
[143] JingTao Yao and Nouman Azam. Web-based medical decision support systems for three-way medical
decision making with game-theoretic rough sets. IEEE Transactions on Fuzzy Systems, 23(1):3–15, 2014.
[144] Nouman Azam, Yan Zhang, and JingTao Yao. Evaluation functions and decision conditions of three-way
decisions with game-theoretic rough sets. European Journal of Operational Research, 261(2):704–714, 2017.
[145] Rahma Hellali, Zaineb Chelly Dagdia, and Karine Zeitouni. Clustering corticosteroids responsiveness in
sepsis patients using game-theoretic rough sets. 2023 18th Conference on Computer Science and Intelligence
Systems (FedCSIS), pages 545–556, 2023.
[146] Suby Singh and Jingtao Yao. Pneumonia detection with game-theoretic rough sets. 2021 20th IEEE
International Conference on Machine Learning and Applications (ICMLA), pages 1029–1034, 2021.
[147] Yixing Chen and Jingtao Yao. Sentiment analysis using part-of-speech-based feature extraction and game-theoretic rough sets. 2021 International Conference on Data Mining Workshops (ICDMW), pages 110–117,
2021.
[148] Yue Zhou, Y. Zhang, and Jingtao Yao. Satirical news detection with semantic feature extraction and
game-theoretic rough sets. In International Symposium on Methodologies for Intelligent Systems, 2020.
[149] Ju-Sheng Mi, Wei-Zhi Wu, and Wen-Xiu Zhang. Approaches to knowledge reduction based on variable
precision rough set model. Information Sciences, 159(3-4):255–272, 2004.
[150] Wojciech Ziarko. Variable precision rough set model. Journal of Computer and System Sciences, 46(1):39–59,
1993.
[151] Suyun Zhao, Eric CC Tsang, and Degang Chen. The model of fuzzy variable precision rough sets. IEEE
Transactions on Fuzzy Systems, 17(2):451–467, 2009.
[152] Malcolm Beynon. Reducts within the variable precision rough sets model: a further investigation. European
Journal of Operational Research, 134(3):592–605, 2001.
[153] Chao-Ton Su and Jyh-Hwa Hsu. Precision parameter in the variable precision rough sets model: an
application. Omega, 34(2):149–157, 2006.


[154] Ayat A. Temraz and Kamal El-Saady. L-valued variable precision rough sets. International Journal of
General Systems, 54:37–70, 2024.
[155] Ajay Mani and Sushmita Mitra. Granular generalized variable precision rough sets and rational approximations. ArXiv, abs/2205.14365, 2022.
[156] Li Zhang and Ping Zhu. Generalized fuzzy variable precision rough sets based on bisimulations and the
corresponding decision-making. International Journal of Machine Learning and Cybernetics, 13:2313–2344, 2022.
[157] Wei Liu, Qihan Liu, Guoju Ye, Dafang Zhao, Yating Guo, and Fangfang Shi. An interval rough number
variable precision rough sets model and its attribute reduction. Journal of Intelligent & Fuzzy Systems, 45:229–238, 2023.
[158] Xi-Bei Yang, XN Song, HL Dou, and JY Yang. Multi-granulation rough set: from crisp to fuzzy case.
Annals of Fuzzy Mathematics and Informatics, 1(1):55–70, 2011.
[159] Yuhua Qian, Jiye Liang, Yiyu Yao, and Chuangyin Dang. MGRS: A multi-granulation rough set. Information Sciences, 180(6):949–970, 2010.
[160] Weihua Xu, Qiaorong Wang, and Xiantao Zhang. Multi-granulation rough sets based on tolerance relations.
Soft Computing, 17:1241–1252, 2013.
[161] Saba Ayub, Waqas Mahmood, Muhammad Shabir, Ali NA Koam, and Rizwan Gul. A study on soft
multi-granulation rough sets and their applications. IEEE Access, 10:115541–115554, 2022.
[162] S Senthil Kumar and H Hannah Inbarani. Optimistic multi-granulation rough set based classification for
medical diagnosis. Procedia Computer Science, 47:374–382, 2015.
[163] Jingqian Wang, Xiaohong Zhang, and Lingling Mao. An OWA-based multi-granulation fuzzy rough set
model using choquet integrals and its applications. Fuzzy Sets Syst., 521:109595, 2025.
[164] Saba Ayub, Waqas Mahmood, Muhammad Shabir, Ali N. A. Koam, and Rizwana Gul. A study on soft
multi-granulation rough sets and their applications. IEEE Access, 10:115541–115554, 2022.
[165] Junxiao Ren and Bo Cao. Hesitant fuzzy multi-granulation rough set model based on similarity assessment.
Symmetry, 17:1903, 2025.
[166] Kholood Mohammad Alsager. Decision-making framework based on multineutrosophic soft rough sets.
Mathematical Problems in Engineering, 2022(1):2868970, 2022.
[167] S. A. El-Sheikh, S. A. Kandil, and S. H. Shalil. Increasing and decreasing soft rough set approximations.
Int. J. Fuzzy Log. Intell. Syst., 23:425–435, 2023.
[168] Shawkat Alkhazaleh and Emad A. Marei. New soft rough set approximations. Int. J. Fuzzy Log. Intell.
Syst., 21:123–134, 2021.
[169] Niladri Chatterjee, Aayush Singha Roy, and Nidhika Yadav. Soft rough set based span for unsupervised
keyword extraction. J. Intell. Fuzzy Syst., 42:4379–4386, 2021.
[170] Muhammad Riaz and Masooma Raza Hashmi. Soft rough pythagorean m-polar fuzzy sets and pythagorean m-polar fuzzy soft rough sets with application to decision-making. Computational and Applied Mathematics, 39, 2019.
[171] Tasawar Abbas, Rehan Zafar, Sana Anjum, Ambreen Ayub, and Zamir Hussain. An innovative soft rough
dual hesitant fuzzy sets and dual hesitant fuzzy soft rough sets. VFAST Transactions on Mathematics,
2023.
[172] Fu Zhang, Weimin Ma, and Hongwei Ma. Dynamic chaotic multi-attribute group decision making under
weighted t-spherical fuzzy soft rough sets. Symmetry, 15:307, 2023.
[173] Muhammad Abdullah, Khuram Ali Khan, Atiqe Ur Rahman, and Michael Kikomba Kahungu. An intelligent decision-support system for climate change mitigation using similarity measures of hypersoft rough
sets. International Journal of Computational Intelligence Systems, 2026.
[174] V. S. Subha and R. Selvakumar. An innovative electric vehicle selection with multi-criteria decision-making approach in Indian brands on a neutrosophic hypersoft rough set by using reduct and core. 2024
Fourth International Conference on Advances in Electrical, Computing, Communication and Sustainable
Technologies (ICAECT), pages 1–7, 2024.
[175] Piyu Li, Jing Liu, Zhi Kong, Wen-Li Liu, and Chang-Tao Xue. On modified soft rough sets (msr-sets).
2017 29th Chinese Control And Decision Conference (CCDC), pages 254–257, 2017.
[176] S Senthilkumar, H Hannah Inbarani, and S Udhayakumar. Modified soft rough set for multiclass classification. In Computational Intelligence, Cyber Security and Computational Models: Proceedings of ICC3,
2013, pages 379–384. Springer, 2013.
[177] S Senthil Kumar and H Hannah Inbarani. Modified soft rough set based ecg signal classification for cardiac
arrhythmias. In Big Data in Complex Systems: Challenges and Opportunities, pages 445–470. Springer,
2015.
[178] Fatia Fatimah. N-soft sets: Literature review and research potential. In Proceeding International Seminar
of Science and Technology, volume 1, pages 27–39, 2021.


[179] Di Zhang, Pi-Yu Li, and Shuang An. N-soft rough sets and its applications. Journal of Intelligent & Fuzzy
Systems, 40(1):565–573, 2021.
[180] Adem Yolcu, Aysun Benek, and Taha Yasin Öztürk. A new approach to neutrosophic soft rough sets.
Knowledge and Information Systems, 65:2043–2060, 2023.
[181] Ashraf Al-Quran, Nasruddin Hassan, and Emad A. Marei. A novel approach to neutrosophic soft rough
set under uncertainty. Symmetry, 11:384, 2019.
[182] Aysun Benek and Taha Yasin Ozturk. A comparative analysis of two different decision-making methods
in neutrosophic soft rough set environments. OPSEARCH, pages 1–22, 2025.
[183] Muhammad Shabir, Muhammad Irfan Ali, and Tanzeela Shaheen. Another approach to soft rough sets.
Knowledge-Based Systems, 40:72–80, 2013.
[184] Zhaowen Li and Tusheng Xie. The relationship among soft sets, soft rough sets and topologies. Soft
Computing, 18(4):717–728, 2014.
[185] Muhammad Ihsan, Muhammad Saeed, Atiqe Ur Rahman, and Florentin Smarandache. Multi-attribute
decision support model based on bijective hypersoft expert set. Punjab University Journal of Mathematics,
54(1), 2022.
[186] Yousef Al-Qudah and Nasruddin Hassan. Complex multi-fuzzy soft expert set and its application. International Journal of Mathematics and Computer Science, 14:149–176, 2019.
[187] Ashraf Al-Quran and Nasruddin Hassan. The complex neutrosophic soft expert set and its application in
decision making. Journal of Intelligent & Fuzzy Systems, 34(1):569–582, 2018.
[188] Faisal Al-Sharqi, Abd Ghafur Ahmad, and Ashraf Al-Quran. Interval-valued neutrosophic soft expert set
from real space to complex space. CMES-Computer Modeling in Engineering & Sciences, 132(1), 2022.
[189] Srinivasan Vijayabalaji. Soft-rough expert set. In Soft Computing, pages 238–249. CRC Press, 2024.
[190] William Zhu and Fei-Yue Wang. On three types of covering-based rough sets. IEEE Transactions on Knowledge and Data Engineering, 19(8):1131–1144, 2007.
[191] William Zhu and Fei-Yue Wang. The fourth type of covering-based rough sets. Information Sciences,
201:80–92, 2012.
[192] Yiyu Yao and Bingxue Yao. Covering based rough set approximations. Information Sciences, 200:91–107,
2012.
[193] Bin Yang and Bao Qing Hu. On some types of fuzzy covering-based rough sets. Fuzzy Sets and Systems,
312:36–65, 2017.
[194] Changzhong Wang, Mingwen Shao, Baiqing Sun, and Qinghua Hu. An improved attribute reduction
scheme with covering based rough sets. Applied Soft Computing, 26:235–243, 2015.
[195] Piotr Pieta and Tomasz Szmuc. Applications of rough sets in big data analysis: An overview. International
Journal of Applied Mathematics and Computer Science, 31:659–683, 2021.
[196] Yuhua Qian, Xinyan Liang, Guoping Lin, Qian Guo, and Jiye Liang. Local multigranulation decision-theoretic rough sets. Int. J. Approx. Reason., 82:119–137, 2017.
[197] Guoqiang Wang, Tianrui Li, Pengfei Zhang, Qianqian Huang, and Hongmei Chen. Double-local rough sets
for efficient data mining. Inf. Sci., 571:475–498, 2021.
[198] Mengmeng Li, Chiping Zhang, Minghao Chen, and Weihua Xu. On local multigranulation covering
decision-theoretic rough sets. Journal of Intelligent & Fuzzy Systems, 40:11107–11130, 2021.
[199] Bingzhen Sun, Zengtai Gong, and Degang Chen. Fuzzy rough set theory for the interval-valued fuzzy
information systems. Inf. Sci., 178:2794–2815, 2008.
[200] Feifei Xu, Zhongqin Bi, and Jingsheng Lei. Approximate reduction for the interval-valued decision table.
In Rough Sets and Knowledge Technology, 2014.
[201] Yunsong Qi and Xibei Yang. Interval-valued analysis for discriminative gene selection and tissue sample
classification using microarray data. Genomics, pages 38–48, 2013.
[202] Lingyu Tang, Zhiwen Mo, and Xianyong Zhang. Uncertainty measures for multi-granulation interval-valued decision systems. 2019 IEEE 14th International Conference on Intelligent Systems and Knowledge
Engineering (ISKE), pages 156–161, 2019.
[203] Sheela Ramanna, James Francis Peters, and Cenker Sengoz. Application of tolerance rough sets in structured and unstructured text categorization: a survey. In Thriving Rough Sets: 10th Anniversary-Honoring
Professor Zdzisław Pawlak’s Life and Legacy & 35 Years of Rough Sets, pages 119–138. Springer, 2017.
[204] Yi-Chung Hu. Pattern classification using grey tolerance rough sets. Kybernetes, 45(2):266–281, 2016.
[205] Jarosław Stepaniuk and M Kretowski. Decision system based on tolerance rough sets. In Proceedings of
the Fourth International Workshop on Intelligent Information Systems, Augustow, Poland, pages 62–73,
1995.
[206] Hao Xiumei, Fu Haiyan, and Shi Kaiquan. S-rough sets and the discovery of f-hiding knowledge. Journal
of Systems Engineering and Electronics, 19(6):1171–1177, 2008.


[207] Li Dongya, Ren Xuefang, and Shi Kaiquan. Rough law generation and its separation-recognition. Journal
of Systems Engineering and Electronics, 20(6):1239–1246, 2009.
[208] Takaaki Fujita. Metafuzzy, metaneutrosophic, metasoft, and metarough set, 2025.
[209] Yan-Ling Bao and Hai-Long Yang. On single valued neutrosophic refined rough set model and its application. In Fuzzy Multi-criteria Decision-Making Using Neutrosophic Sets, pages 107–143. Springer, 2018.
[210] Hu Zhao and Hong-Ying Zhang. A result on single valued neutrosophic refined rough approximation
operators. Journal of Intelligent & Fuzzy Systems, 35(3):3139–3146, 2018.
[211] Irfan Deli. Refined neutrosophic sets and refined neutrosophic soft sets: theory and applications. In
Handbook of research on generalized and hybrid set structures and applications for soft computing, pages
321–343. IGI Global, 2016.
[212] Vakkas Uluçay. Some concepts on interval-valued refined neutrosophic sets and their applications. Journal
of Ambient Intelligence and Humanized Computing, 12(7):7857–7872, 2021.
[213] Vakkas Ulucay, Adil Kılıç, Memet Sahin, and Harun Deniz. A new hybrid distance-based similarity measure
for refined neutrosophic sets and its application in medical diagnosis. Infinite Study, 2019.
[214] Florentin Smarandache. n-valued refined neutrosophic logic and its applications to physics. Infinite Study,
4:143–146, 2013.
[215] Anjan Mukherjee, Mithun Datta, and Abhijit Saha. Refined soft sets and its applications. Journal of New
Theory, 14:10–25, 2016.
[216] Faruk Karaaslan. Correlation coefficients of single-valued neutrosophic refined soft sets and their applications in clustering analysis. Neural Computing and Applications, 28(9):2781–2793, 2017.
[217] Xueyou Chen. On rough cubic sets. Annals of Fuzzy Mathematics and Informatics, 21(3):319–331, 2021.
[218] Gagandeep Kaur and Harish Garg. Cubic intuitionistic fuzzy aggregation operators. International Journal
for Uncertainty Quantification, 8(5), 2018.
[219] Muhammad Saqlain, Raiha Imran, and Sabahat Hassan. Cubic intuitionistic fuzzy soft set and its distance
measures. Scientific Inquiry and Review, 6(2):59–75, 2022.
[220] Young Bae Jun, Seok-Zun Song, and Seon Jeong Kim. Cubic interval-valued intuitionistic fuzzy sets and
their application in bck/bci-algebras. Axioms, 7(1):7, 2018.
[221] Young Bae Jun, Florentin Smarandache, and Chang Su Kim. Neutrosophic cubic sets. New Mathematics and Natural Computation, 13(01):41–54, 2017.
[222] SP Priyadharshini and F Nirmala Irudayam. P and R Order of Plithogenic Neutrosophic Cubic sets.
Infinite Study, 2021.
[223] Mumtaz Ali, Irfan Deli, and Florentin Smarandache. The theory of neutrosophic cubic sets and their
applications in pattern recognition. Journal of Intelligent & Fuzzy Systems, 30(4):1957–1963, 2016.
[224] Jianming Zhan, Madad Khan, Muhammad Gulistan, and Ahmed Ali. Applications of neutrosophic cubic
sets in multi-criteria decision-making. International Journal for Uncertainty Quantification, 7(5), 2017.
[225] Muhammad Gulistan, Ahmed Elmoasry, and Naveed Yaqoob. N-version of the neutrosophic cubic set:
application in the negative influences of internet. The Journal of Supercomputing, 77(10):11410–11431,
2021.
[226] Surapati Pramanik, Shyamal Dalapati, Shariful Alam, and Tapan Kumar Roy. NC-TODIM-based MAGDM
under a neutrosophic cubic set environment. Information, 8(4):149, 2017.
[227] Ajay Mani. Granular directed rough sets, concept organization and soft clustering. ArXiv, abs/2208.06623,
2022.
[228] Ajay Mani and Sándor Radeleczki. Algebraic approach to directed rough sets. ArXiv, abs/2004.12171,
2020.
[229] Akın Osman Atagün and Hüseyin Kamacı. Strait soft sets and strait rough sets with applications in
decision making. Soft Computing, 27(20):14585–14599, 2023.
[230] Ajay Mani. Dialectical rough sets, parthood and figures of opposition. ArXiv, abs/1703.10251, 2017.
[231] Chih-Ching Hsiao, Chen-Chia Chuang, Jin-Tsong Jeng, and Shun-Feng Su. A weighted fuzzy rough sets
based approach for rule extraction. In The SICE Annual Conference 2013, pages 104–109. IEEE, 2013.
[232] Kuanyun Zhu, Jingru Wang, and Yongwei Yang. A study on z-soft fuzzy rough sets in bci-algebras.
IAENG International Journal of Applied Mathematics, 2020.
[233] Jianming Zhan, Kuanyun Zhu, and Muhammad Irfan Ali. Study on z-soft fuzzy rough sets in hemirings.
J. Multiple Valued Log. Soft Comput., 30:359–377, 2018.
[234] Dliouah Ahmed and Binxiang Dai. Picture fuzzy rough set and rough picture fuzzy set on two different
universes and their applications. Journal of Mathematics, 2020.
[235] Hasan Dinçer, Serhat Yüksel, Alexey Mikhaylov, and Vera Ivanyuk. An integrated analysis for digital
financial assets and artificial intelligence-based financial management using ai-based neuro quantum picture
fuzzy rough sets and econometric modeling. Financial Innovation, 11(1):1–24, 2025.


[236] Tanzeela Shaheen, Muhammad Irfan Ali, and Muhammad Shabir. Generalized hesitant fuzzy rough sets
(GHFRS) and their application in risk analysis. Soft Computing, 24:14005–14017, 2020.
[237] Haidong Zhang, Lan Shu, and Shilong Liao. Topological structures of interval-valued hesitant fuzzy rough
set and its application. J. Intell. Fuzzy Syst., 30:1029–1043, 2016.
[238] Haidong Zhang, Lan Shu, and Lianglin Xiong. On novel hesitant fuzzy rough sets. Soft Computing,
23:11357–11371, 2019.
[239] Rizwan Gul, Muhammad Shabir, and Ahmad N Al-Kenani. Covering-based (α, β)-multi-granulation
bipolar fuzzy rough set model under bipolar fuzzy preference relation with decision-making applications.
Complex & Intelligent Systems, 10(3):4351–4372, 2024.
[240] Hai-Long Yang, Sheng-Gang Li, Shouyang Wang, and Jue Wang. Bipolar fuzzy rough set model on two
different universes and its application. Knowledge-Based Systems, 35:94–101, 2012.
[241] Faiza Tufail and Muhammad Shabir. VIKOR method for MCDM based on bipolar fuzzy soft β-covering
based bipolar fuzzy rough set model and its application to site selection of solar power plant. Journal of
Intelligent & Fuzzy Systems, 42(3):1835–1857, 2022.
[242] Anessa Arif, Aliya Fahmi, Aziz Khan, Aiman Mukheimer, Thabet Abdeljawad, and Rajermani Thinakaran.
Domination in bipolar fuzzy rough digraphs with applications to decision-making. European Journal of
Pure and Applied Mathematics, 18(4):6216–6216, 2025.
[243] Saba Ayub, Muhammad Shabir, Muhammad Riaz, Waqas Mahmood, Darko Božanić, and Dragan
Marinković. Linear diophantine fuzzy rough sets: A new rough set approach with decision making. Symmetry, 14:525, 2022.
[244] Saba Ayub, Muhammad Shabir, and Rizwana Gul. Another approach to linear diophantine fuzzy rough
sets on two universes and its application towards decision-making problems. Physica Scripta, 98, 2023.
[245] Alhamzah Alnoor, A. A. Zaidan, Sara Qahtan, Hassan A. Alsattar, R. T. Mohammed, Khai Wah Khaw,
Mohammed Alazab, Teh Sin Yin, and Ahmed Shihab Albahri. Toward a sustainable transportation industry: Oil company benchmarking based on the extension of linear diophantine fuzzy rough sets and
multicriteria decision-making methods. IEEE Transactions on Fuzzy Systems, 31:449–459, 2023.
[246] Hilah Awad Alharbi and Kholood Mohammad Alsager. Fermatean m-polar fuzzy soft rough sets with
application to medical diagnosis. AIMS Mathematics, 10(6):14314–14346, 2025.
[247] Alicja Mieszkowicz-Rolka and Leszek Rolka. Variable precision fuzzy rough sets. Trans. Rough Sets,
1:144–160, 2004.
[248] Xiaohong Zhang, Qiqi Ou, and Jingqian Wang. Variable precision fuzzy rough sets based on overlap
functions with application to tumor classification. Inf. Sci., 666:120451, 2024.
[249] Dan Meng, Xiaohong Zhang, and Keyun Qin. Soft rough fuzzy sets and soft fuzzy rough sets. Computers
& Mathematics with Applications, 62(12):4635–4645, 2011.
[250] Jianming Zhan, Muhammad Irfan Ali, and Nayyar Mehmood. On a novel uncertain soft set model: Z-soft
fuzzy rough set model and corresponding decision making methods. Appl. Soft Comput., 56:446–457, 2017.
[251] Qinghua Hu, Lei Zhang, Shuang An, David Zhang, and Daren Yu. On robust fuzzy rough set models.
IEEE Transactions on Fuzzy Systems, 20:636–651, 2012.
[252] Jilin Yang, Xianyong Zhang, and Keyun Qin. Constructing robust fuzzy rough set models based on
three-way decisions. Cognitive Computation, 14:1955–1977, 2021.
[253] Ahmad Bin Azim, Asad Ali, Abdul Samad Khan, Sumbal Ali, Fuad A Awwad, and Emad AA Ismail.
q-spherical fuzzy rough Einstein geometric aggregation operator for image understanding and interpretations.
IEEE Access, 12:140380–140411, 2024.
[254] Ahmad Bin Azim, Asad Ali, Abdul Samad Khan, Fuad A Awwad, Sumbal Ali, and Emad AA Ismail.
Aggregation operators based on einstein averaging under q-spherical fuzzy rough sets and their applications
in navigation systems for automatic cars. Heliyon, 10(15), 2024.
[255] Maheen Sultan and Muhammad Akram. An extended multi-criteria decision-making technique for hydrogen and fuel cell supplier selection by using spherical fuzzy rough numbers. Journal of Applied Mathematics
and Computing, 71(2):1843–1886, 2025.
[256] Krassimir T Atanassov. Intuitionistic fuzzy sets. Springer, 1999.
[257] Zhiming Zhang. Generalized intuitionistic fuzzy rough sets based on intuitionistic fuzzy coverings. Inf.
Sci., 198:186–206, 2012.
[258] Zhan-ao Xue, Li-ping Zhao, Lin Sun, M. Zhang, and Tianyu Xue. Three-way decision models based on
multigranulation support intuitionistic fuzzy rough sets. Int. J. Approx. Reason., 124:147–172, 2020.
[259] Tahir Mahmood, Jabbar Ahmmad, Zeeshan Ali, and Miin-Shen Yang. Confidence level aggregation
operators based on intuitionistic fuzzy rough sets with application in medical diagnosis. IEEE Access,
11:8674–8688, 2023.
[260] Muhammad Kamraz Khan, Kamran, Muhammad Sajjad Ali Khan, Ahmad Aloqaily, and Nabil Mlaiki.
Covering-based intuitionistic hesitant fuzzy rough set models and their application to decision-making
problems. Symmetry, 16:693, 2024.


[261] Sovan Samanta, Madhumangal Pal, Hossein Rashmanlou, and Rajab Ali Borzooei. Vague graphs and
strengths. Journal of Intelligent & Fuzzy Systems, 30(6):3675–3680, 2016.
[262] Humberto Bustince and P Burillo. Vague sets are intuitionistic fuzzy sets. Fuzzy Sets and Systems,
79(3):403–405, 1996.
[263] B Surender Reddy, S Vijayabalaji, N Thillaigovindan, and P Balaji. A vague rough set approach for
failure mode and effect analysis in medical accident. International Journal of Analysis and Applications,
21:134–134, 2023.
[264] Karan Singh, Samajh Singh Thakur, and Mangi Lal. Vague rough set techniques for uncertainty processing
in relational database model. Informatica, 19(1):113–134, 2008.
[265] Chunxin Bo, Xiaohong Zhang, Songtao Shao, and Florentin Smarandache. Multi-granulation neutrosophic
rough sets on a single domain and dual domains with applications. Symmetry, 10:296, 2018.
[266] Abhik Mukherjee, Anjan Mukherjee, Rakhal Das, and Ajoy Kanti Das. A new approach of generalized
interval-valued neutrosophic rough set and its application in medical diagnosis. Transactions on Fuzzy
Sets and Systems, 5(1):115–134, 2026.
[267] Hu Zhao and Hongying Zhang. Some results on multigranulation neutrosophic rough sets on a single
domain. Symmetry, 10:417, 2018.
[268] Muhammad Kamran, Shahzaib Ashraf, and Muhammad Shazib Hameed. A promising approach with
confidence level aggregation operators based on single-valued neutrosophic rough sets. Soft Computing,
pages 1–24, 2023.
[269] Surapati Pramanik and Kalyan Mondal. Rough bipolar neutrosophic set. Global Journal of Engineering
Science and Research Management, 3(6):71–81, 2016.
[270] S Sudha, B Felcia Merlin, B Shoba, and A Rajkumar. Quadripartitioned neutrosophic probability distributions. International Journal of Neutrosophic Science (IJNS), 25(2), 2025.
[271] Zhi-Lian Guo, Yan-Ling Liu, and Hai-Long Yang. A novel rough set model in generalized single valued
neutrosophic approximation spaces and its application. Symmetry, 9(7):119, 2017.
[272] Hai-Long Yang, Yan-Ling Bao, and Zhi-Lian Guo. Generalized interval neutrosophic rough sets and its
application in multi-attribute decision making. Filomat, 32(1):11–33, 2018.
[273] Suman Das, Rakhal Das, and Binod Chandra Tripathy. Topology on rough pentapartitioned neutrosophic
set. Iraqi Journal of Science, pages 2630–2640, 2022.
[274] Babisha Julit RL et al. Pentapartitioned Fermatean neutrosophic soft-rough set and its applications. South
East Asian Journal of Mathematics & Mathematical Sciences, 21(2), 2025.
[275] Gürel Bozma and Uyesi Nazmye. Pentapartitioned rough Fermatean neutrosophic normed spaces. International Science And Art Research, pages 191–205, 2022.
[276] Fazeelat Sultana, Muhammad Gulistan, Mumtaz Ali, Naveed Yaqoob, Muhammad Khan, Tabasam Rashid,
and Tauseef Ahmed. A study of plithogenic graphs: applications in spreading coronavirus disease (covid-19)
globally. Journal of Ambient Intelligence and Humanized Computing, 14(10):13139–13159, 2023.
[277] Muhammad Azeem, Humera Rashid, Muhammad Kamran Jamil, Selma Gütmen, and Erfan Babaee Tirkolaee. Plithogenic fuzzy graph: A study of fundamental properties and potential applications. Journal of
Dynamics and Games, pages 0–0, 2024.
[278] P Sathya, Nivetha Martin, and Florentin Smarandache. Plithogenic forest hypersoft sets in plithogenic
contradiction based multi-criteria decision making. Neutrosophic Sets and Systems, 73:668–693, 2024.
[279] Ruixuan Chen, Qun Liu, Yucang Zhang, and Wuyin Weng. A probabilistic plithogenic neutrosophic rough
set for uncertainty-aware food safety analysis. Neutrosophic Sets & Systems, 90, 2025.
[280] Takaaki Fujita. Plithogenic rough sets. Advancing Uncertain Combinatorics through Graphization, Hyperization, and Uncertainization: Fuzzy, Neutrosophic, Soft, Rough, and Beyond, 152, 2025.
[281] Takaaki Fujita and Florentin Smarandache. A unified framework for u-structures and functorial structure:
Managing super, hyper, superhyper, tree, and forest uncertain over/under/off models. Neutrosophic Sets
and Systems, 91:337–380, 2025.
[282] James F Peters. Fuzzy sets, near sets, and rough sets for your computational intelligence toolbox. In
Foundations of Computational Intelligence Volume 2: Approximate Reasoning, pages 3–25. Springer, 2009.
[283] Yosr Ghozzi, Nesrine Baklouti, Hani Hagras, Mounir Ben Ayed, and Adel M Alimi. Interval type-2
beta fuzzy near sets approach to content-based image retrieval. IEEE Transactions on Fuzzy Systems,
30(3):805–817, 2021.
[284] S Anita Shanthi and R Valarmathi. Image processing on fuzzy near sets. In 2021 IEEE Madras Section
Conference (MASCON), pages 1–5. IEEE, 2021.
[285] S Anita Shanthi and R Valarmathi. Core on fuzzy near sets. Advances in Mathematics: Scientific Journal,
9(4):1521–1532, 2020.
[286] J Peters, A Meghdadi, and S Ramanna. Fuzzy metrics for near rough sets. Theoretical Computer Science,
2010.


[287] Sheela Ramanna, Amir H Meghdadi, and James F Peters. Nature-inspired framework for measuring visual
image resemblance: A near rough set approach. Theoretical Computer Science, 412(42):5926–5938, 2011.
[288] ME Abd El-Monsef, AM Kozae, and MJ Iqelan. Near rough and near exact sets in topological spaces.
Indian Journal of Mathematics and Mathematical Sciences, 5(1):49–58, 2009.
[289] Irem Ucal Sari and Cengiz Kahraman. Intuitionistic fuzzy z-numbers. In International conference on
intelligent and fuzzy systems, pages 1316–1324. Springer, 2020.
[290] Abrar Hussain and Mazhar Ali. A critical estimation of ideological and political education for sustainable
development goals using an advanced decision-making model based on intuitionistic fuzzy z-numbers.
International Journal of Sustainable Development Goals, 1:23–44, 2025.
[291] Tingting Yang. A novel framework for ter allocation using multilayer perceptron and intuitionistic fuzzy
z-numbers for talent management. Scientific Reports, 15(1):31491, 2025.
[292] Muhammad Kamran, Nadeem Salamat, and Muhammad Shazib Hameed. Sine trigonometric aggregation
operators with single-valued neutrosophic z-numbers: Application in business site selection. Neutrosophic
Sets and Systems, 63(1):18, 2024.
[293] Jun Ye. Similarity measures based on the generalized distance of neutrosophic z-number sets and their
multi-attribute decision making method. Soft Computing, 25:13975–13985, 2021.
[294] Mesut Karabacak. Correlation coefficient for neutrosophic z-numbers and its applications in decision
making. J. Intell. Fuzzy Syst., 45:215–228, 2023.
[295] Shigui Du, Jun Ye, Rui Yong, and Fangwei Zhang. Some aggregation operators of neutrosophic z-numbers
and their multicriteria decision making method. Complex & Intelligent Systems, 7:429–438, 2020.
[296] Bindu Nila and Jagannath Roy. Analysis of critical success factors of logistics 4.0 using d-number based
pythagorean fuzzy dematel method. Decision Making Advances, 2(1):92–104, 2024.
[297] Nagarajan Deivanayagama Pillai, Bhuvaneswari Thangavel, Yasothei Suppiah, and Kanchanna Anbazhagan. A fusion-based framework for neutrosophic entropy and attribute weighting using linguistic d-numbers.
Big Data and Computing Visions, page e229588, 2025.
[298] Yuzhen Li and Yabin Shao. Fuzzy cognitive maps based on d-number theory. IEEE Access, 10:72702–72716,
2022.
[299] Dávid Nagy, Tamás Mihálydeák, and László Aszalós. Similarity based rough sets. In International Joint
Conference on Rough Sets, pages 94–107. Springer, 2017.
[300] Shivani Singh, Shivam Shreevastava, Tanmoy Som, and Gaurav Somani. A fuzzy similarity-based rough
set approach for attribute selection in set-valued information systems. Soft Computing, 24(6):4675–4691,
2020.
[301] Ahmed Hamed Attia, Ahmed Sobhy Sherif, and Ghada Samy El-Tawel. Maximal limited similarity-based
rough set model. Soft Computing, 20(8):3153–3161, 2016.
[302] Dávid Nagy. Similarity-based rough sets and its applications in data mining. In Transactions on Rough
Sets XXII, pages 252–323. Springer, 2020.
[303] Krzysztof Krawiec, Roman Słowiński, and Daniel Vanderpooten. Learning decision rules from similarity
based rough approximations. In Rough Sets in Knowledge Discovery 2: Applications, Case Studies and
Software Systems, pages 37–54. Springer, 1998.
[304] Tamás Mihálydeák. Logic on similarity based rough sets. In International Joint Conference on Rough
Sets, pages 270–283. Springer, 2018.
[305] Dávid Nagy, Tamás Mihálydeák, and László Aszalós. Similarity based rough sets with annotation. In
International Joint Conference on Rough Sets, pages 88–100. Springer, 2018.
[306] Sha Qiao, Ping Zhu, and Witold Pedrycz. Rough set analysis of graphs. Filomat, 36(10):3331–3354, 2022.
[307] Meilian Liang, Binmei Liang, Linna Wei, and Xiaodong Xu. Edge rough graph and its application. In
2011 Eighth International Conference on Fuzzy Systems and Knowledge Discovery (FSKD), volume 1,
pages 335–338. IEEE, 2011.
[308] Danyang Wang and Ping Zhu. Graph reduction in a path information-based rough directed graph model.
Soft Computing, 26(9):4171–4186, 2022.
[309] Weiping Ding, Bairu Pan, Hengrong Ju, Jiashuang Huang, Chun Cheng, Xinjie Shen, Yu Geng, and Tao
Hou. Rg-gcn: Improved graph convolution neural network algorithm based on rough graph. IEEE Access,
10:85582–85594, 2022.
[310] Shaik Noorjahan and Shaik Sharief Basha. The laplacian energy of an intuitionistic fuzzy rough graph and
its utilisation in decision-making. Operations Research and Decisions, 35, 2025.
[311] Muhammad Akram, Maham Arshad, and Shumaiza. Fuzzy rough graph theory with applications. Int. J.
Comput. Intell. Syst., 12:90–107, 2018.
[312] R Noor, I Irshad, and I Javaid. Soft rough graphs. arXiv preprint arXiv:1707.05837, 2017.
[313] Nasir Shah, Noor Rehman, Muhammad Shabir, and Muhammad Irfan Ali. Another approach to roughness
of soft graphs with applications in decision making. Symmetry, 10(5):145, 2018.


[314] Muhammad Akram, Hafsa M Malik, Sundas Shahzadi, and Florentin Smarandache. Neutrosophic soft
rough graphs with application. Axioms, 7(1):14, 2018.
[315] Florentin Smarandache, Sidra Sayed, Nabeela Ishfaq, and Muhammad Akram. Rough neutrosophic digraphs with application. Axioms, 7(5), 2018.
[316] Takaaki Fujita, Atiqe Ur Rahman, Arkan A Ghaib, Talal Ali Al-Hawary, and Arif Mehmood Khattak. On
the properties and illustrative examples of soft superhypergraphs and rough superhypergraphs. Prospects
for Applied Mathematics and Data Analysis, 5(1):12–31, 2025.
[317] Tong He, Yong Chen, and Kaiquan Shi. Weighted rough graph and its application. In Sixth International
Conference on Intelligent Systems Design and Applications, volume 1, pages 486–491. IEEE, 2006.
[318] He Tong, Xue Peijun, and Shi Kaiquan. Application of rough graph in relationship mining. Journal of
Systems Engineering and Electronics, 19(4):742–747, 2008.
[319] Setenay Akduman, Ahmet Zemci Özçelik, and Cenap Özel. Rough topology on covering-based rough sets.
International Journal of Computational Systems Engineering, 2(2):107–111, 2015.
[320] Arun Kumar and Shilpi Kumari. Rough sets: A topological view. New Mathematics and Natural Computation, 21(02):661–675, 2025.
[321] Muhammad Riaz, Florentin Smarandache, Atiqa Firdous, and Atiqa Fakhar. On soft rough topology with
multi-attribute group decision making. Mathematics, 7(1):67, 2019.
[322] Nof Alharbi, Hassen Aydi, Cenap Özel, and Selçuk Topal. Rough topologies on classical and based covering
rough sets with applications in making decisions on chronic thromboembolic pulmonary hypertension.
International Journal of Intelligent Engineering Informatics, 8(3):173–185, 2020.
[323] Duoqian Miao, Suqing Han, Daoguo Li, and Lijun Sun. Rough group, rough subgroup and their properties.
In International Workshop on Rough Sets, Fuzzy Sets, Data Mining, and Granular-Soft Computing, pages
104–113. Springer, 2005.
[324] Changzhong Wang and Degang Chen. A short note on some properties of rough groups. Computers
& Mathematics with Applications, 59(1):431–436, 2010.
[325] Gülay Oğuz, Ilhan Içen, and M Habil Gürsoy. Lie rough groups. Filomat, 32(16):5735–5741, 2018.
[326] TMG Ahsanullah. Rough uniformity of topological rough groups and L-fuzzy approximation groups. Journal
of Intelligent & Fuzzy Systems, 43(1):1129–1139, 2022.
[327] William Zhu and Shiping Wang. Rough matroid. In 2011 IEEE International Conference on Granular
Computing, pages 817–824. IEEE, 2011.
[328] Jingqian Wang and William Zhu. Rough sets and matroidal contraction. arXiv preprint arXiv:1209.5482,
2012.
[329] Bin Yang, Hong Zhao, and William Zhu. Rough matroids based on coverings. arXiv preprint
arXiv:1311.0351, 2013.


Handbook of Rough Set Extensions and Uncertainty Models
provides a systematic and structured survey of rough set theory
and its major extensions developed to model uncertainty,
vagueness, granularity, and indiscernibility in data-driven
systems. Rooted in Pawlak’s original approximation framework,
rough set theory distinguishes between what can be determined
with certainty and what remains only possible under limited
information. Over time, numerous generalizations have emerged
to address increasingly complex forms of uncertainty.
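The certain/possible distinction described above is Pawlak's pair of lower and upper approximations. As a minimal sketch (not taken from the handbook itself; the universe, partition, and target set below are illustrative assumptions), the lower approximation of a concept X collects the indiscernibility classes fully contained in X, while the upper approximation collects every class that merely intersects X:

```python
def approximations(classes, target):
    """Return (lower, upper) approximations of `target` w.r.t. a partition."""
    target = set(target)
    lower, upper = set(), set()
    for cls in classes:
        cls = set(cls)
        if cls <= target:       # class entirely inside X -> certainly in X
            lower |= cls
        if cls & target:        # class meets X -> possibly in X
            upper |= cls
    return lower, upper

# Hypothetical example: a universe of six objects partitioned by some
# attribute into three indiscernibility classes.
partition = [{1, 2}, {3, 4}, {5, 6}]
X = {1, 2, 3}                   # the concept to approximate
low, up = approximations(partition, X)
# low == {1, 2}; up == {1, 2, 3, 4}; boundary region = up - low == {3, 4}
```

The boundary region `up - low` is exactly the set of objects whose membership in X cannot be decided under the available information; X is "rough" precisely when this region is non-empty.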
This handbook serves as a comprehensive “model map” of the
rough-set landscape. It organizes and analyzes a wide spectrum
of variants, including equivalence-based, tolerance-based,
covering-based, neighborhood-based, probabilistic, weighted,
multi-granulation, hierarchical, and sequential rough sets. In
addition, hybrid uncertainty models, such as fuzzy rough sets,
intuitionistic fuzzy rough sets, neutrosophic rough sets,
plithogenic rough sets, and other generalized formulations, are
presented within a unified conceptual taxonomy.


