I-Scream Specification Outline (Realtime side only)
===================================================

ajm4, 30/10/2000
tdb1, 30/10/2000

System Component Startup
************************

CORE
----
The core of the system provides little or no functionality
to the operation of the system itself, but instead oversees
the running of the system. At startup this should be the
first component to instantiate. It essentially acts as a
central logging and configuration distribution site, the
"central" in centralised monitoring system. It may also be
running the ORB or some components related to it.

On startup the first thing it should do is read in any
configuration files, start the logging interface, and then
prepare to bring the system online. This is done by
starting the various components. If, however, the system
configuration states that particular components are
operating in "distributed" mode, then it blocks until the
various key components have registered that they are
online.
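
As a rough illustration of that startup order, the sketch
below (assuming a Java implementation) reads configuration,
starts logging, and then blocks until the distributed key
components have called back to register. The class and
method names are illustrative only.

    import java.util.concurrent.CountDownLatch;

    public class Core {

        // Released once each distributed key component has
        // registered itself as online.
        private final CountDownLatch keyComponents;

        public Core(int distributedComponentCount) {
            this.keyComponents =
                new CountDownLatch(distributedComponentCount);
        }

        // Called (e.g. over the ORB) by a remote component
        // when it comes online.
        public void registerComponent(String name) {
            System.out.println("component registered: " + name);
            keyComponents.countDown();
        }

        public void start() throws InterruptedException {
            // 1. read in the configuration files
            // 2. start the logging interface
            // 3. start local components, or block until the
            //    distributed key components have registered
            keyComponents.await();
            System.out.println("all key components online");
        }
    }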

Client Interface
----------------
The Client Interface is essentially just one component with
a series of lists within it. When run it should create an
instance of the Client Interface, bind this to the ORB, and
register it with the naming service.
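
For illustration, that registration might look like the
sketch below, using the standard Java IDL (CORBA) API.
ClientInterfaceImpl is a hypothetical stand-in for the
IDL-generated servant class, and the name "ClientInterface"
bound into the naming service is an assumption.

    import org.omg.CORBA.ORB;
    import org.omg.CosNaming.NameComponent;
    import org.omg.CosNaming.NamingContextExt;
    import org.omg.CosNaming.NamingContextExtHelper;
    import org.omg.PortableServer.POA;
    import org.omg.PortableServer.POAHelper;

    public class ClientInterfaceServer {
        public static void main(String[] args) throws Exception {
            // Initialise the ORB (the CORE may be running the
            // ORB itself; here a reachable NameService is assumed).
            ORB orb = ORB.init(args, null);

            // Activate the root POA so servants can be exported.
            POA rootPoa = POAHelper.narrow(
                orb.resolve_initial_references("RootPOA"));
            rootPoa.the_POAManager().activate();

            // Hypothetical servant implementing the Client
            // Interface IDL interface.
            ClientInterfaceImpl servant = new ClientInterfaceImpl();
            org.omg.CORBA.Object ref =
                rootPoa.servant_to_reference(servant);

            // Register the reference with the naming service.
            NamingContextExt nameService = NamingContextExtHelper.narrow(
                orb.resolve_initial_references("NameService"));
            NameComponent[] path = nameService.to_name("ClientInterface");
            nameService.rebind(path, ref);

            orb.run();   // hand control over to the ORB
        }
    }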

It can then read its configuration in from the CORE and get
a hook on the logging service that the CORE provides.

It then needs to construct the "local clients". These
clients communicate with the system using the same
interface as the external clients, but they are tailored to
specific purposes, such as E-Mail alerts and SMS alerts.
The Client Interface then listens on a "well known" address
for clients to request a connection.
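
A minimal sketch of that listening loop is shown below; the
port number is made up, and handleClient() stands in for the
hand-off into the main Client Interface.

    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class ClientListener {
        // Illustrative port; the real value would come from
        // the configuration served by the CORE.
        private static final int WELL_KNOWN_PORT = 4510;

        public static void main(String[] args) throws IOException {
            try (ServerSocket server = new ServerSocket(WELL_KNOWN_PORT)) {
                while (true) {
                    Socket client = server.accept();
                    // Hand the connection to the main Client
                    // Interface, which asks the client which
                    // hosts it is interested in.
                    new Thread(() -> handleClient(client)).start();
                }
            }
        }

        private static void handleClient(Socket client) {
            // query the client, then add it to the routing table
        }
    }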

Filter
------
The filter is broken down into three main subcomponents.

 - Filter Manager
   The Filter Manager is responsible for managing which
   filters are used by the hosts. The Filter Manager is
   available at a "well known" location which is pre-
   programmed into the hosts. The Filter Manager is
   responsible for creating and managing the other
   components of the filter system.

 - Main Filter
   The Main Filter is the single point that links back
   into the CORE of the system. It will connect to the
   DBI and the CLI to deliver data.

 - Filters
   There can be multiple Filters, and these are the
   "front line" to the hosts. They all link back to the
   Main Filter to send data into the system. It is
   possible to run these Filters on any machine, allowing
   management of data flow.

At startup a Filter Manager object is activated at the
"well known" location (probably a given machine name at a
predefined port). The Filter Manager will create an
instance of the Main Filter, and any Filters under its
control. It should also bind itself to the ORB and register
with the naming service.

It can then read its configuration in from the CORE and get
a hook on the logging service that the CORE provides.

Through some mechanism the other Filters, elsewhere on the
network, will register with the Filter Manager. The Filter
Manager will need to tell each Filter the location of the
Main Filter upon registering. The Filter Manager will then
be in a position to receive connections from hosts and pass
them off to Filters.
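
Since the registration mechanism is left open here, the
sketch below is only one possibility: a remote Filter calls
register() and is told where the Main Filter lives. The
FilterRef type and the use of a plain host/port pair are
assumptions.

    import java.util.ArrayList;
    import java.util.List;

    public class FilterManager {

        // Placeholder for however a remote Filter is referenced
        // (a CORBA object reference, a host/port pair, etc.).
        public static class FilterRef {
            public final String host;
            public final int port;
            public FilterRef(String host, int port) {
                this.host = host;
                this.port = port;
            }
        }

        private final List<FilterRef> filters = new ArrayList<>();
        private final String mainFilterLocation;

        public FilterManager(String mainFilterLocation) {
            this.mainFilterLocation = mainFilterLocation;
        }

        // Called by a Filter elsewhere on the network when it
        // starts up; the Filter is told where the Main Filter
        // is, and becomes available for host allocation.
        public synchronized String register(FilterRef filter) {
            filters.add(filter);
            return mainFilterLocation;
        }
    }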

System Running State
********************

CORE
----
Once the various components are running, the core is
essentially idle, logging information and handling
configuration changes.

Client Interface
----------------
In the running state the Client Interface is always
listening for clients on the "well known" address. When a
connection is received it is passed in to the main Client
Interface and the client is queried about which hosts it
wishes to receive information about. This is then stored in
an internal "routing table" so the Client Interface knows
which clients to send each host's information on to. This
routing table is constructed in the following form:

    host1: client1 client2 client5
    host2: client2
    host3: client3 client4
    host4: client1 client3

This design is such that when a piece of information is
received from a host the Client Interface can immediately
see which clients wish to receive this data, without too
much searching.
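
A direct way to hold this table is a map from host name to
the set of interested clients, so that delivery is a single
lookup. A minimal sketch (names are illustrative):

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    public class RoutingTable {

        private final Map<String, Set<String>> hostToClients =
            new HashMap<>();

        // Record that a client wants data about a host.
        public synchronized void addInterest(String client,
                                             String host) {
            hostToClients.computeIfAbsent(host,
                h -> new HashSet<>()).add(client);
        }

        // Called when a piece of data arrives from a host:
        // returns the clients it should be sent on to.
        public synchronized Set<String> clientsFor(String host) {
            return hostToClients.getOrDefault(host, new HashSet<>());
        }
    }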

The "local clients" function just like any other client,
although they are local, in that they will wish to receive
information about the hosts they are interested in.
However, they will contain a lot more logic, and will be
required to work out who wants to be alerted about what,
and when. They will also be responsible for sending the
alerts.

Filter
------
When a host first loads up it knows where to locate the
Filter Manager, because the Filter Manager lives at a "well
known" location. The host will fire up a TCP connection to
the Filter Manager to announce itself. The Filter Manager
will use some logical method to allocate a Filter to the
host. The Filter Manager should base this decision on
various factors, such as the load on the available Filters,
and possibly their location in relation to the host. The
host will then be directed to this Filter for all further
communications.
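
One possible allocation policy is sketched below: simply
pick the least-loaded registered Filter. The Filter type
and its load() method are assumptions, and a real policy
might also weigh network locality, as noted above.

    import java.util.Comparator;
    import java.util.List;

    public class FilterAllocator {

        // Assumed view of a registered Filter.
        public interface Filter {
            String location();   // where the host should connect
            int load();          // e.g. hosts currently allocated
        }

        // Return the location of the Filter the host should use.
        public String allocate(List<Filter> filters) {
            return filters.stream()
                    .min(Comparator.comparingInt(Filter::load))
                    .map(Filter::location)
                    .orElseThrow(() ->
                        new IllegalStateException("no Filters registered"));
        }
    }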

As the system runs the host will send data (maybe over UDP)
to the Filter that it has been allocated to. This choice
has been made because it puts less onus on the host to make
a connection; rather, the data is just sent out. However,
to ensure that the data isn't just disappearing into the
depths of the network, a periodic heartbeat will occur (at
a predefined interval) over TCP to the Filter. This
heartbeat can be used as a form of two-way communication,
ensuring that everything is ok and, if required, sending
any information back to the host. This heartbeat must
occur, otherwise the server may infer that the host has
died.
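
From the host's side this might look roughly like the
sketch below, with data fired off over UDP and a separate
periodic heartbeat over TCP. The ports, payloads and class
name are illustrative only.

    import java.io.OutputStream;
    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.net.Socket;

    public class HostSender {

        private final InetAddress filterAddr;
        private final int udpPort;
        private final int tcpPort;

        public HostSender(InetAddress filterAddr,
                          int udpPort, int tcpPort) {
            this.filterAddr = filterAddr;
            this.udpPort = udpPort;
            this.tcpPort = tcpPort;
        }

        // Send one packet of monitoring data: no connection,
        // no acknowledgement.
        public void sendData(String data) throws Exception {
            byte[] payload = data.getBytes("UTF-8");
            try (DatagramSocket socket = new DatagramSocket()) {
                socket.send(new DatagramPacket(payload,
                    payload.length, filterAddr, udpPort));
            }
        }

        // Periodic heartbeat over TCP; failure here tells both
        // sides that something is wrong.
        public void heartbeat() throws Exception {
            try (Socket socket = new Socket(filterAddr, tcpPort);
                 OutputStream out = socket.getOutputStream()) {
                out.write("HEARTBEAT\n".getBytes("UTF-8"));
            }
        }
    }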

This could link in to alerting. An amber alert could be
initiated for a host if the server stops receiving UDP
packets, but a red alert raised if the heartbeat doesn't
occur.
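
On the server side that rule could be expressed roughly as
below; the timeout values and the AlertLevel names are
assumptions.

    public class HostWatcher {

        public enum AlertLevel { OK, AMBER, RED }

        private volatile long lastUdp = System.currentTimeMillis();
        private volatile long lastHeartbeat = System.currentTimeMillis();

        public void udpReceived()       { lastUdp = System.currentTimeMillis(); }
        public void heartbeatReceived() { lastHeartbeat = System.currentTimeMillis(); }

        public AlertLevel level(long udpTimeout, long heartbeatTimeout) {
            long now = System.currentTimeMillis();
            if (now - lastHeartbeat > heartbeatTimeout) {
                // heartbeat missed: infer the host has died
                return AlertLevel.RED;
            }
            if (now - lastUdp > udpTimeout) {
                // data has stopped, but the host still answers
                return AlertLevel.AMBER;
            }
            return AlertLevel.OK;
        }
    }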

If, for some reason, the Filter were to disappear, the host
should fall back on its initial discovery mechanism - i.e.
contacting the Filter Manager at its "well known" location.
The host should report that it has lost its Filter (so the
Filter Manager can investigate and remove it from its list
of Filters), and then the Filter Manager will reassign a
new Filter to the host. Communication can then continue.

The idea of plugins to the Filters has been introduced.
These plugins will implement a predefined plugin interface,
and can be chained together at the Filter. Using the
interface we can easily add future plugins that can do
anything from parsing new data formats to implementing
encryption algorithms. The Filter will pass incoming data
to each plugin that it has available in turn, and then
finally pass the data on to the Main Filter. The Filter
need not have any real knowledge about the content of the
data.
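
A minimal version of such a plugin interface and chain
might look like the sketch below. The names are
illustrative, and the data is treated as an opaque byte
array to reflect that the Filter need not understand it.

    import java.util.List;

    public class FilterPipeline {

        // The predefined plugin interface: data in, data out.
        public interface FilterPlugin {
            byte[] process(byte[] data);
        }

        private final List<FilterPlugin> plugins;

        public FilterPipeline(List<FilterPlugin> plugins) {
            this.plugins = plugins;
        }

        // Run the chain in order; the result is what the Filter
        // passes on to the Main Filter.
        public byte[] run(byte[] incoming) {
            byte[] data = incoming;
            for (FilterPlugin plugin : plugins) {
                data = plugin.process(data);
            }
            return data;
        }
    }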