root/i-scream/projects/cms/documentation/specification/spec-realtime.txt

Comparing projects/cms/documentation/specification/spec-realtime.txt (file contents):
Revision 1.1 by ajm, Mon Oct 30 16:49:16 2000 UTC vs.
Revision 1.3 by tdb, Mon Oct 30 21:45:33 2000 UTC

1   I-Scream Specification Outline (Realtime side only)
2   ===================================================
3  
4   ajm4, 30/10/2000
5 + tdb1, 30/10/2000
6  
7   System Component Startup
8 < ------------------------
8 > ************************
9  
10   CORE
11 < ****
11 > ----
12  
13  
14   Client Interface
15 < ****************
15 > ----------------
16 > The Client Interface is essentially just one component with
17 > a series of lists within it. When run it should
18 > create an instance of the Client Interface, and then bind
19 > this to the ORB and register with the naming service. It
20 > then needs to construct the "local clients". These clients
21 > communicate with the system using the same interface as the
22 > external clients, but they are tailored to specific
23 > purposes, such as E-Mail and SMS alerts. The Client
24 > Interface then listens on a "well known" address for clients
25 > to request a connection.
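
As a rough illustration, startup might look like this in
Java with CORBA (the spec names an ORB and naming service
but no API; the servant is taken to be IDL-generated, and
the class names, port and alerter clients are assumptions):

  import org.omg.CORBA.ORB;
  import org.omg.CosNaming.NamingContextExt;
  import org.omg.CosNaming.NamingContextExtHelper;

  public class ClientInterfaceMain {
      public static void main(String[] args) throws Exception {
          ORB orb = ORB.init(args, null);

          // Create the single Client Interface instance and
          // attach it to the ORB.
          ClientInterfaceImpl ci = new ClientInterfaceImpl();
          orb.connect(ci);

          // Register with the naming service so the rest of
          // the system can find it.
          NamingContextExt nc = NamingContextExtHelper.narrow(
              orb.resolve_initial_references("NameService"));
          nc.rebind(nc.to_name("ClientInterface"), ci);

          // Construct the "local clients" - same interface as
          // external clients, tailored to e-mail and SMS.
          new EmailAlerter(ci).start();  // hypothetical
          new SMSAlerter(ci).start();    // hypothetical

          // Listen on the "well known" address for external
          // clients requesting a connection.
          new ClientListener(ci, 4510).start();  // hypothetical

          orb.run();
      }
  }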
26  
16
27   Filter
28 < ******
28 > ------
29 > The filter is broken down into three main subcomponents.
30  
31 +  - Filter Manager
32 +      The Filter Manager is responsible for managing which
33 +      Filters are used by the hosts. It is available at a
34 +      "well known" location which is pre-programmed into
35 +      the hosts. The Filter Manager is also responsible
36 +      for creating and managing the other components of
37 +      the filter system.
38 +  
39 +  - Main Filter
40 +      The Main Filter is the single point that links back
41 +      into the CORE of the system. It will connect to the
42 +      DBI and the CLI to deliver data.
43 +  
44 +  - Filters
45 +      There can be multiple Filters, and these are the
46 +      "front line" to the hosts. They all link back to the
47 +      Main Filter to send data into the system. It is
48 +      possible to run these Filters on any machine, allowing
49 +      management of data flow.
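
A minimal sketch of these three subcomponents as Java
interfaces (all names and methods are assumptions, since
the spec defines no API):

  public interface Filter {
      void receive(byte[] packet);        // data from a host
      void setMainFilter(MainFilter mf);  // set by the Manager
  }

  public interface MainFilter {
      void deliver(byte[] data);  // single link into the CORE
  }

  public interface FilterManager {
      void registerFilter(Filter f);      // Filters announce
                                          // themselves
      Filter allocateFilter(String host); // pick a Filter for
                                          // a new host
  }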
50 +
51 + At startup a Filter Manager object is activated at the "well
52 + known" location (probably a given machine name at a
53 + predefined port). The Filter Manager will create an instance
54 + of the Main Filter, and any Filters under its control. It
55 + should also bind itself to the ORB and register with the
56 + naming service. Through some mechanism the other Filters,
57 + elsewhere on the network, will register with the Filter
58 + Manager. The Filter Manager will need to tell each Filter
59 + the location of the Main Filter upon registering. The Filter
60 + Manager will then be in a position to receive connections
61 + from hosts and pass them off to Filters.
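
One assumed shape for that registration step, continuing
the interfaces sketched earlier (the actual mechanism is
left open by the spec):

  import java.util.ArrayList;
  import java.util.List;

  public class FilterManagerImpl implements FilterManager {
      // Created by the Filter Manager at startup.
      private final MainFilter mainFilter =
          new MainFilterImpl();  // hypothetical implementation
      private final List<Filter> filters = new ArrayList<>();

      // Filters elsewhere on the network register here; each
      // is told the location of the Main Filter in return.
      public synchronized void registerFilter(Filter f) {
          f.setMainFilter(mainFilter);
          filters.add(f);
      }

      public synchronized Filter allocateFilter(String host) {
          // see the allocation sketch further down
          return filters.isEmpty() ? null : filters.get(0);
      }
  }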
62 +
63   System Running State
64 < --------------------
64 > ********************
65  
66   CORE
67 < ****
67 > ----
68  
69  
70   Client Interface
71 < ****************
71 > ----------------
72 > In the running state the Client Interface is always
73 > listening for clients on the "well known" address. When a
74 > connection is received it is passed in to the main Client
75 > Interface and the client is queried about which hosts it
76 > wishes to receive information about. This is then stored in
77 > an internal "routing table" so the Client Interface knows
78 > which hosts to send the information on to. This routing
79 > table is constructed in this form:
80  
81 +  host1: client1 client2 client5
82 +  host2: client2
83 +  host3: client3 client4
84 +  host4: client1 client3
85  
86 + This design is such that when a piece of information is
87 + received from a host the Client Interface can immediately
88 + see which clients wish to receive this data, without too
89 + much searching.
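
In code terms the routing table can be a simple map from
host to the set of interested clients; fanning out a piece
of data is then a single lookup (the Client type and its
send method are assumptions):

  import java.util.*;

  interface Client { void send(String data); }

  public class RoutingTable {
      private final Map<String, Set<Client>> routes =
          new HashMap<>();

      // Record which hosts a newly connected client wants.
      public synchronized void register(Client c,
                                        Collection<String> hosts) {
          for (String h : hosts)
              routes.computeIfAbsent(h, k -> new HashSet<>())
                    .add(c);
      }

      // On data from a host, send to exactly the interested
      // clients - no searching required.
      public synchronized void route(String host, String data) {
          for (Client c : routes.getOrDefault(host,
                                    Collections.emptySet()))
              c.send(data);
      }
  }

With the table above, registering client1 for host1 and
host4 reproduces the client1 entries shown.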
90 +
91 + The "local clients" function just like any other client,
92 + although they are local, in that they will wish to receive
93 + information about hosts they are interested in. However,
94 + they will contain a lot more logic, and be required to work
95 + out who wants to be alerted about what, and when. They will
96 + also be responsible for sending the alert.
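
A local client might then be sketched as below; the alert
rule and helpers are assumptions, the point being that it
consumes host data through the same Client interface:

  // Hypothetical e-mail alerter: registered for the hosts
  // it monitors, it works out who to alert, and when.
  public class EmailAlerter implements Client {
      public void send(String data) {
          if (exceedsThreshold(data))       // assumed rule
              sendEmail(adminFor(data), data);
      }

      private boolean exceedsThreshold(String data) {
          return data.contains("ALERT");    // placeholder
      }

      private String adminFor(String data) {
          return "root@localhost";          // placeholder
      }

      private void sendEmail(String to, String body) {
          // e.g. via JavaMail; omitted here
      }
  }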
97 +
98   Filter
99 < ******
99 > ------
100 > When a host first loads up it knows where to locate the
101 > Filter Manager because it resides at a "well known"
102 > location. The host will fire up a TCP connection to the
103 > Filter Manager to announce itself. The Filter Manager will
104 > use some logical method to allocate a Filter to the
105 > host. The Filter Manager should base this decision on
106 > various factors, such as the load on the available
107 > Filters, and possibly their location in relation to the host.
108 > The host will then be directed to this Filter for all
109 > further communications.
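
The allocation decision itself might look like this (the
load metric, and a getLoad call on Filter, are assumptions):

  // Pick the least loaded Filter for a newly announced
  // host; location could act as a tie-breaker.
  public synchronized Filter allocateFilter(String host) {
      Filter best = null;
      double bestLoad = Double.MAX_VALUE;
      for (Filter f : filters) {
          double load = f.getLoad();  // assumed reporting call
          if (load < bestLoad) {
              bestLoad = load;
              best = f;
          }
      }
      return best;  // the host is directed here from now on
  }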
110  
111 + As the system runs the host will send data with (maybe) UDP
112 + to the Filter (that it's been allocated to). This choice has
113 + been made because it puts less onus on the host to make the
114 + connection; rather, the data is just sent out. However, to
115 + ensure that the data isn't just disappearing into the depths
116 + of the network a periodic heartbeat will occur (at a
117 + predefined interval) over TCP to the Filter. This heartbeat
118 + can be used as a form of two-way communication, ensuring
119 + that everything is ok and, if required, sending any
120 + information back to the host. This heartbeat must occur
121 + otherwise the server may infer the host has died.
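
From the host's side the transport could be sketched as
follows (host name, ports and the 60 second interval are
all assumptions; the spec only fixes UDP for data and TCP
for the heartbeat):

  import java.io.*;
  import java.net.*;

  public class HostSender {
      private final InetAddress filter;
      private final DatagramSocket udp;

      public HostSender(String filterHost) throws IOException {
          filter = InetAddress.getByName(filterHost);
          udp = new DatagramSocket();
      }

      // Data is fired off over UDP - no connection to keep up.
      public void sendData(byte[] data) throws IOException {
          udp.send(new DatagramPacket(data, data.length,
                                      filter, 4589));
      }

      // Periodic TCP heartbeat; the reply channel could carry
      // information back to the host.
      public void heartbeatLoop()
              throws IOException, InterruptedException {
          while (true) {
              try (Socket tcp = new Socket(filter, 4591);
                   BufferedReader in = new BufferedReader(
                       new InputStreamReader(tcp.getInputStream()));
                   PrintWriter out = new PrintWriter(
                       tcp.getOutputStream(), true)) {
                  out.println("HEARTBEAT");
                  in.readLine();  // e.g. "OK", or data for the host
              }
              Thread.sleep(60_000);
          }
      }
  }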
122  
123 + This could link in to alerting. An amber alert could be
124 + initiated for a host if the server stops receiving UDP
125 + packets, but a red alert raised if the heartbeat doesn't
126 + occur.
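
That escalation rule is simple to state in code (the
silence thresholds are assumptions):

  enum AlertLevel { OK, AMBER, RED }

  class AlertCheck {
      static final long UDP_LIMIT       = 120_000; // assumed
      static final long HEARTBEAT_LIMIT = 180_000; // assumed

      // Amber when UDP data stops arriving; red when the
      // heartbeat itself is missed.
      static AlertLevel levelFor(long now, long lastUdp,
                                 long lastHeartbeat) {
          if (now - lastHeartbeat > HEARTBEAT_LIMIT)
              return AlertLevel.RED;
          if (now - lastUdp > UDP_LIMIT)
              return AlertLevel.AMBER;
          return AlertLevel.OK;
      }
  }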
127  
128 + If, for some reason, the Filter were to disappear the host
129 + should fall back on its initial discovery mechanism - i.e.
130 + contacting the Filter Manager at its "well known" location.
131 + The host should report that it has lost its Filter (so the
132 + Filter Manager can investigate and remove it from its list of
133 + Filters), and then the Filter Manager will reassign a new
134 + Filter to the host. Communication can then continue.
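
The host-side recovery step might be sketched like this
(reportLostFilter is an assumed call; the spec only says
the host reports the loss):

  // On losing its Filter, fall back to the Filter Manager
  // at the "well known" location and ask again.
  Filter recoverFilter(FilterManager manager, String hostName,
                       Filter lost) {
      manager.reportLostFilter(lost);  // manager investigates
                                       // and prunes its list
      return manager.allocateFilter(hostName);
  }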
135 +
136 + The idea of plugins to the Filters has been introduced.
137 + These plugins will implement a predefined plugin interface,
138 + and can be chained together at the Filter. Using the
139 + interface we can easily add future plugins that can do
140 + anything from parsing new data formats to implementing
141 + encryption algorithms. The Filter will pass incoming data to
142 + each plugin in turn that it has available, and then finally
143 + pass the data on to the Main Filter. The Filter need not
144 + have any real knowledge about the content of the data.
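
A sketch of that chain, assuming the predefined plugin
interface transforms raw data (method names are
illustrative):

  import java.util.List;

  public interface FilterPlugin {
      // Anything from parsing a new format to decryption.
      byte[] process(byte[] data);
  }

  public class PluginChain {
      private final List<FilterPlugin> plugins;
      private final MainFilter mainFilter;

      public PluginChain(List<FilterPlugin> plugins,
                         MainFilter mainFilter) {
          this.plugins = plugins;
          this.mainFilter = mainFilter;
      }

      // Pass incoming data through each plugin in turn, then
      // on to the Main Filter; the content is never inspected.
      public void handle(byte[] incoming) {
          byte[] data = incoming;
          for (FilterPlugin p : plugins)
              data = p.process(data);
          mainFilter.deliver(data);
      }
  }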

Diff Legend

- Removed lines
+ Added lines
< Changed lines (old)
> Changed lines (new)