Java Development News:
Requirements for Building Industrial Strength Web Services
By Billy Newport
01 Jul 2001 | TheServerSide.com
There is a glossary at the end of the article explaining any acronyms.
Vertical and Horizontal Applications
A basic change has been occurring in the way companies develop their applications, which has been a catalyst for today's need for Web Services. In the past, the focus of IT development has been on vertical applications.
- A vertical application is one that covers a single aspect of the business.
- A horizontal application is one that covers many aspects of the business.
Historically, vertical applications have been easier to implement than horizontal, enterprise-wide applications. Companies are now trying to turn themselves into an eBusiness where a virtual application will represent their entire portfolio of vertical and horizontal applications. However, neither Web Services nor J2EE, used alone or together, is sufficient to do this type of integration.
The Internet: The Ultimate Suite of Vertical Applications
When we look at B2B problems, we see that they are very similar to the problem of integrating the ultimate set of vertical applications. Each company's infrastructure was developed independently of other companies' applications. B2B can be viewed as a joining of these companies' applications so that they work as one larger application. Web Services (UDDI, SOAP, WSDL) are being pushed as the way to make this happen, but at the moment, they are unable to provide a solution to this problem. Why? At present, Web Services are just a set of specifications:
- SOAP. A specification for XML RPCs over a transport.
- WSDL. A specification for describing interfaces or endpoints.
- UDDI. A specification for a directory implementation.
- WSFL. A specification for describing business processes.
A service broker will implement all of these and more. Service brokers will be the middleware behind these Web Services specifications.
UDDI, SOAP, and WSDL are not enough to build real Web Services
Implementing a thin SOAP/WSDL/UDDI layer on top of your typical J2EE application is not enough to build real Web Services. Trivial Web Services can be built that way, but a lot more infrastructure is needed for companies to create horizontal Web Services applications that span their enterprise.
Message Brokers are now just part of the puzzle
Messaging systems were the original middleware for enterprise companies. Products such as DEC MessageQ and IBM MQ Series were the main high-end means of making different applications communicate. Next in the evolutionary chain came message brokers, which could take messages and apply logic to them before forwarding them to the destination application. The logic would filter, transform, and possibly enrich the message before forwarding. A broker might also implement content-based routing: it examines the message content and, based on that examination, forwards the message to one or more of a set of destinations, possibly applying different logic for each destination.
Today's world is developing a level of complexity that message brokers are not designed to address. Today, it's not about connecting application A to application B; it's about taking a set of applications, applying a business process flow to control the integration, and exposing this aggregate application using whatever middleware is appropriate.
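As a sketch of the content-based routing and enrichment a message broker performs, the following hypothetical Java fragment inspects one field of a message, picks a destination, and enriches the message on the way to one of them. The field names and queue names are illustrative assumptions, not any product's API.

```java
import java.util.Map;

// Minimal sketch of content-based routing: examine the message
// content, pick a destination, and enrich per destination.
public class ContentRouter {

    // High-value orders are routed to a manual review queue,
    // everything else goes straight to fulfilment.
    static String route(Map<String, String> message) {
        double amount = Double.parseDouble(message.get("amount"));
        return amount > 10000 ? "queue.review" : "queue.fulfilment";
    }

    // Enrichment for one destination: the review system expects an
    // extra priority field added before forwarding.
    static Map<String, String> transformFor(String destination, Map<String, String> message) {
        Map<String, String> copy = new java.util.HashMap<>(message);
        if (destination.equals("queue.review")) {
            copy.put("priority", "high");
        }
        return copy;
    }

    public static void main(String[] args) {
        Map<String, String> msg = Map.of("orderId", "42", "amount", "25000");
        String dest = route(msg);
        System.out.println(dest + " <- " + transformFor(dest, msg));
    }
}
```

A real broker would drive these steps from a declarative rule set rather than hard-coded logic, but the examine/route/enrich shape is the same.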
Example: Online Insurance Broker
We will use an online insurance broker as an example throughout the article. The broker's business is to receive requests for quotation (RFQs) from clients. It then contacts a set of insurers and presents the best three quotes from those insurers to the client. If the client accepts one, the broker gets a commission.
The company currently has a system that is connected to two insurers. The broker is at a disadvantage because adding another insurer to the current application is very expensive, so it is looking at developing a new application that allows both internet and telephone clients to request quotes.
The broker would also like to support a new standard interface to insurers. This interface is Web Service based. The insurers' association has standardized an XML Schema specification that describes how a "request for quote", a "quote", and an "accept quote" can be specified using XML, and it has also standardized a WSDL definition for talking to an insurer. The broker wants to build the new application on a Web Services architecture to take advantage of this.
Once it is implemented, the broker can send RFQs to a set of insurers electronically using WSDL and SOAP; it can then provide these quotes to the client. If the client accepts one, the broker sends an "accept quote" document to the insurer who handles the rest of the procedure, and sends the broker its commission.
Previously, the broker didn't support the internet, but it sees the cost savings of allowing clients to fill in forms online, get the quotes, and accept one, compared with the old method of phoning a call-center rep. The internet would also allow 24/7 quotations, whereas the current call center is only open from 9am to 6pm. We will see how a service broker can help this company implement such a system as we progress through the example.
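The broker's core step, collecting quotes and presenting the best three, can be sketched in a few lines. The `Quote` record and its premium field are assumptions for illustration; they are not part of the insurers' standardized schema described above.

```java
import java.util.Comparator;
import java.util.List;

// Sketch of the broker's selection step: given the quotes returned by
// the insurers, present the best three (lowest premium) to the client.
public class QuoteSelector {

    record Quote(String insurer, double premium) {}

    static List<Quote> bestThree(List<Quote> quotes) {
        return quotes.stream()
                .sorted(Comparator.comparingDouble(Quote::premium))
                .limit(3)
                .toList();
    }

    public static void main(String[] args) {
        List<Quote> quotes = List.of(
                new Quote("InsurerA", 520.0),
                new Quote("InsurerB", 480.0),
                new Quote("InsurerC", 610.0),
                new Quote("InsurerD", 455.0));
        bestThree(quotes).forEach(q -> System.out.println(q.insurer() + ": " + q.premium()));
    }
}
```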
Service Broker: The Platform for Web Services
A Service Broker is used to make a company's heterogeneous vertical applications work together and to provide a simplified interface to external systems using a variety of middleware. In a nutshell, a Service Broker allows a company to combine all of its assets into one horizontal application that it can expose to the outside world as a Web Service. Examples of the first implementations of Service Brokers are applications such as Extricity, Web Methods, SilverStream Extend, IBM B2Bi, and BEA Process Integrator/Collaborator. These products are currently marketed as B2B products but are really Service Brokers underneath.
A Service Broker is made up of the following components (each part will be explained later in the article):
- A Business Process Manager (BPM) Component. This is a workflow-type component that allows business processes to be defined. It acts as a coordinator for interactions between multiple applications. Applications and humans (through a work-list management API) can be interfaced to this component. WSFL may become the language for defining these business processes.
- Middleware connectors. It should support a variety of input and output connectors. The connectors are used by external applications that need to invoke a service, communicate with applications involved in the business process, and communicate events or responses back to the external application. Examples of connectors would be: RMI/IIOP, SOAP, JMS, CICS, IMS, any JCA-supported EIS, or IIOP. The interfaces and the middleware implementation of an interface exposed by a connector can be described using WSDL.
- Content-based Routing and Transformations for messages.
Message-based connectors will also support simple flows that allow content-based routing and message translation/transformation/enrichment.
- Security Mapping.
Multiple applications and especially external applications will use different security
schemes. A service broker needs to provide credential/role mapping and authorization
across all the involved components. This greatly simplifies programming and isolates
security issues to a single component.
- Process State Management.
These new aggregate applications may need to store state about the process.
- Connector Discovery Mechanisms.
Currently, UDDI is the best-known example of such a facility. It implements white
(lookup by name), yellow (lookup by type), and green (lookup by interface supported)
searches over the available registered connectors.
- Transaction Monitor. When integrating multiple applications, it may be necessary to have transactions involving more than one application. A transaction monitor is an essential piece of software that makes this happen.
Each of these topics will now be addressed separately to reinforce the definitions.
The Business Process Manager (BPM) Component
The Business Process Manager Component looks like a traditional workflow engine, but it is designed for the tasks of application integration as well as traditional human workflow-type tasks. Performance and availability should also be key features. Scalability of the modeling/development tools is another key factor. The tooling should be able to handle large numbers of processes in a manageable fashion and integrate with corporate versioning tools such as ClearCase from Rational, Continuus, CVS, and PVCS.
- Our example can use a workflow to model our typical use case, "Getting a quote". The workflow is associated with the client's user ID. The workflow consists of gathering the client's details, transmitting them to the insurers, and, as responses arrive, adding them to the given transaction's data store. The client can accept a quote, modify the details, or cancel the transaction. Some quotes from insurers may only be valid for a period of time, so the workflow engine will remove them automatically when they expire. The workflow keeps track of the state of a user transaction that may span four weeks. When the user logs in to the web site, he can see all workflows owned by him and can interact with those flows in the ways described. The workflows show the user the status of the outstanding or completed quotes.
Our web-based application needs an API to create workflows and store a reference to them in the user's data store. It also needs to query the status of workflows and allow the user to update the workflow (accept quote/cancel transaction/modify details).
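A minimal sketch of that API is shown below. The class, statuses, and method names are assumptions for illustration; a real BPM engine would persist this state and enforce the transitions declaratively.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the workflow API the web application needs: create a
// workflow for a user, query its status, and let the user accept a
// quote or cancel the transaction.
public class QuoteWorkflow {

    enum Status { GATHERING_QUOTES, QUOTED, ACCEPTED, CANCELLED }

    private final String userId;
    private Status status = Status.GATHERING_QUOTES;
    private final List<String> quotes = new ArrayList<>();

    QuoteWorkflow(String userId) { this.userId = userId; }

    // Responses from insurers arrive asynchronously and are added
    // to the transaction's data store.
    void addQuote(String quote) {
        quotes.add(quote);
        status = Status.QUOTED;
    }

    // A real engine would match the accepted quote against the stored
    // ones and notify the winning insurer; we just move the state on.
    void acceptQuote(String quote) {
        if (status != Status.QUOTED) throw new IllegalStateException("no quotes yet");
        status = Status.ACCEPTED;
    }

    void cancel() { status = Status.CANCELLED; }

    Status status() { return status; }

    public static void main(String[] args) {
        QuoteWorkflow wf = new QuoteWorkflow("client-17");
        wf.addQuote("InsurerA: 480");
        wf.acceptQuote("InsurerA: 480");
        System.out.println(wf.status());
    }
}
```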
Aids the componentization process
Workflows can be composed of workflows. A workflow can be thought of as a reusable component: it encapsulates a specific business process. A workflow has an interface consisting of an input, an output, and the events that it generates or may receive during its lifecycle.
Business Process Modeling and Activity Agents
We define a business process using the designer portion of this tool. We specify start states and end states. We specify activities using a graph connecting these start and end points.
These activities are usually implemented using an agent. These agents can be legacy applications, J2EE EJBs, or humans. Legacy applications need to be integrated with the BPM using the appropriate middleware, not always RMI/IIOP. J2EE EJBs can be integrated using JMS or remote calls but the 'best' way is to embed a J2EE container in the BPM system and then deploy your beans to a container hosted by the BPM service. This gives the best performance, as the bean invocation is a local call for the BPM engine rather than a remote one.
A connector framework is used to 'plug in' these activity agents. This is described in the next section.
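As a sketch, such an activity-agent plug-in point might be a single interface that every agent implements, whether it fronts a legacy application, an in-container EJB, or a human work list. All names here are illustrative assumptions, not an actual product API.

```java
// Sketch of how a connector framework might plug activity agents into
// the BPM engine: each agent type implements one common interface.
public class AgentFramework {

    interface ActivityAgent {
        // Perform one activity of a business process and return its output.
        String perform(String activityInput);
    }

    // A legacy application reached over some middleware bridge.
    static class LegacyAgent implements ActivityAgent {
        public String perform(String input) { return "legacy:" + input; }
    }

    // An in-container EJB-style agent; a local call for the BPM engine.
    static class LocalBeanAgent implements ActivityAgent {
        public String perform(String input) { return "bean:" + input; }
    }

    // The engine drives any agent through the same interface.
    static String runActivity(ActivityAgent agent, String input) {
        return agent.perform(input);
    }

    public static void main(String[] args) {
        System.out.println(runActivity(new LegacyAgent(), "rfq-123"));
        System.out.println(runActivity(new LocalBeanAgent(), "rfq-123"));
    }
}
```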
BPM can be used for processes like user registration/deregistration, campaign management, content management/reviewing, order processing, and returns management among others. It can also be used to co-ordinate several applications so that use cases that span the applications can be implemented relatively easily.
People are "handled" using work lists. A work list is a set of activities selected from the total list of available activities that the BPM system allows an individual to perform. Role-based permissions determine this subset. When an activity is modeled, the business process modeler specifies who can process the activity. This usually means the BPM tools need to model an organization and specify who in that organization should be able to see/process/administer this activity. This organization model should be able to integrate with existing corporate directories (a weakness of both MQ Workflow and BEA Process Integrator, which insist on having an internal copy of this directory). Humans then use a standard or custom GUI to check out (accept) an activity, work on it, and then check it in (i.e., complete it) so that the business process can continue.
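Role-based work-list filtering reduces to a simple selection over the pending activities. The role names and activity model below are illustrative assumptions.

```java
import java.util.List;
import java.util.Set;

// Sketch of role-based work lists: from the total set of pending human
// activities, each user sees only the subset their roles permit.
public class WorkList {

    record Activity(String name, String requiredRole) {}

    static List<Activity> workListFor(Set<String> userRoles, List<Activity> pending) {
        return pending.stream()
                .filter(a -> userRoles.contains(a.requiredRole()))
                .toList();
    }

    public static void main(String[] args) {
        List<Activity> pending = List.of(
                new Activity("Approve refund", "supervisor"),
                new Activity("Key in phone quote", "call-center"),
                new Activity("Review claim", "underwriter"));
        workListFor(Set.of("call-center"), pending)
                .forEach(a -> System.out.println(a.name()));
    }
}
```

In a real system the role check would be delegated to the corporate directory rather than an in-memory set.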
Standards such as the Web Services Flow Language (WSFL), BPML (for defining flows), and BPQL (for querying active business processes from the BPM database) should be useful for defining such BPM engines.
- Our example has people activities: Accept quote, cancel transaction, gather user details, modify user details. These are all basically form-based screens that gather information from the user or allow the user to influence the workflow (accept quote).
Making the services defined by these business processes accessible to a variety of external/internal triggers and integrating applications into the above BPM activities requires the ability to support a wide variety of middleware. Message brokering capabilities (message transformation, enrichment) should be built in and usable regardless of the middleware chosen (these transform/enrich operations are useful even with RMI/IIOP middleware, as an adapter may be required). Transactional support across the service broker's implementation and external applications is also important. Security is also a major concern. We won't always receive connections using the middleware (J2EE server) that the broker is built on. This means that the connection needs to be authenticated and then authorized. The connector framework needs to support these operations.
What is a connector?
A connector is defined as an interface implemented with a specific protocol, or a network endpoint. Whenever a service needs to interact with an external entity (whether it's on a corporate LAN or over the internet), we implement this using a connector. The service broker should allow these connectors to be implemented using a variety of middleware products. These connectors will be advertised using the connector discovery functionality provided by the service broker. This discovery component will probably be implemented using UDDI, and the connectors will be described using WSDL.
Clients of a service will use connectors to start services and to interact with a running service when required. Connectors are used not only by the applications that start the service but also by the components that comprise the service. These components are interfaced with the service broker using connectors.
Some connectors are public and others are private to the service implementers. The connector will probably need access to a common security infrastructure. We describe such an infrastructure later in the article. It may require access so that it can perform authentication/authorization of the incoming/outgoing events.
- Our example has several types of connectors. Some insurers implement WSDL/SOAP while others implement WSDL/Email. We also need a manual connector where a person sees the message and physically calls an insurer for the quote which is then keyed into the system; just because insurers aren't online doesn't mean they can't offer a competitive quote so we need to accommodate these also. The application that implements this connector could itself be connected to the main application using WSDL/SMTP for example, as it may be a legacy AS/400 green screen application (the old application). The broker didn't want to redevelop this application (it already interfaces to two insurers who have not implemented the new WSDL based interface).
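The example's connector variants can be sketched as one interface with several transports behind it. The interface and class names are assumptions for illustration; the delivery strings simply stand in for the real protocol work.

```java
// Sketch of the insurance example's connectors: the same insurer
// interface reached over SOAP/HTTP, over email, or via a person who
// phones the insurer and keys in the quote.
public class InsurerConnectors {

    interface InsurerConnector {
        // Send a request-for-quote document; returns how it was delivered.
        String sendRfq(String rfqXml);
    }

    static class SoapHttpConnector implements InsurerConnector {
        public String sendRfq(String rfqXml) { return "POSTed via SOAP/HTTP"; }
    }

    static class EmailConnector implements InsurerConnector {
        public String sendRfq(String rfqXml) { return "queued as SMIME email"; }
    }

    // A human sees the RFQ, phones the insurer, and keys in the quote.
    static class ManualConnector implements InsurerConnector {
        public String sendRfq(String rfqXml) { return "placed on call-center work list"; }
    }

    public static void main(String[] args) {
        for (InsurerConnector c : new InsurerConnector[] {
                new SoapHttpConnector(), new EmailConnector(), new ManualConnector() }) {
            System.out.println(c.sendRfq("<rfq/>"));
        }
    }
}
```

Because the broker's business process only sees `InsurerConnector`, an insurer can be upgraded from the manual or email path to SOAP/HTTP without touching the rest of the system.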
Connectors can be built on top of J2EE
J2EE application servers probably offer the best underlying infrastructure for implementing a container for these connectors. J2EE servers offer JTA for transactions, JCA for integrating legacy EIS systems, and JMS for messaging. In addition, many new applications are being developed using J2EE servers. Tooling for this environment is also good and getting better. J2EE skills are now becoming more prevalent and this should lower the cost of implementing solutions based on J2EE over proprietary solutions.
Web Services as a connector framework
Web Services can be viewed as a framework for connectors. Web Services is a protocol-independent mechanism for advertising connectors as well as for publishing the details of how to interact with them. It provides a very flexible discovery mechanism with UDDI (Universal Description, Discovery and Integration). UDDI allows a client to find an interface/service/endpoint implemented with a specific protocol. A connector/endpoint is defined as an interface implemented with a specific protocol and is specified using WSDL as the definition language.
For example, we want to implement a "Get city weather" connector. This connector is used to trigger the "Get City Weather" service. We list this connector in UDDI. We then add all the protocols we support for this connector. The following are examples of these protocols:
- Telephone number! Yep, it's a valid interface if you've got a call center. We would create a B2C application that interacts with the service broker, and the operator would basically interface the caller with the service broker.
- RMI. Fat clients may prefer to use this.
- SOAP/HTTP. Internet clients/applets may prefer this.
- SOAP/HTTPS. There are some truly paranoid people out there who want to keep their obsession with the weather a secret.
- SOAP requests sent via email.
- SOAP requests sent over a message transport.
- IIOP. A straight Corba client may want this. Non-Java clients on any platform, from Cobol and RPG to C/C++, as well as Java, can use this to talk with our server.
We don't just want external clients to be able to access our connectors. A large company will have internal users of these connectors also.
UDDI will replace the vendor specific JNDI directories that are used to find EJBs in most J2EE servers today or we can expect J2EE vendors to add UDDI facilities to their JNDI (White Pages equivalent) implementation.
WSDL toolkits should come with generators to produce stubs for Java, C/C++, Corba IDL, EJB, COM, SOAP/HTTP, and SOAP over email. These generated stubs, both server and client-side, should let us plug anything we want into the connector framework. Web Services gives us a way to expose connectors that start/stop services and any intermediate connectors required by our services. These intermediate connectors are needed to publish events generated by the service broker or to receive external events.
Connectors need credential mapping
When a connector attaches to a system using different middleware than the one the service broker is implemented in, we need credential mapping. Suppose the service broker is implemented using WebSphere and we want a connector to invoke logic in a legacy system. We need to authenticate the user that is logged into WebSphere with the legacy system. This involves mapping the WebSphere user to a user in the legacy system. All WebSphere users may be mapped to a single credential in the legacy system, or WebSphere users can be grouped and mapped to a single user per group, or each WebSphere user can have a specific identity on the legacy system. This is further explored later in the Security Mapping section.
- In our example, we need to authenticate with several systems. The ones using WSDL/Email use SMIME digital signatures, the ones using WSDL/HTTP use SSL and PKI. We need to store any passwords/credentials securely in an approved container so that the risk of having these credentials stored persistently is controlled.
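The three mapping strategies above can be sketched as a lookup where the most specific mapping wins. All identifiers here are illustrative assumptions.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of credential mapping: all users share one legacy credential,
// users map per group, or each user has an individual legacy identity.
public class CredentialMapper {

    private final Map<String, String> perUser = new HashMap<>();
    private final Map<String, String> perGroup = new HashMap<>();
    private final String defaultCredential;

    CredentialMapper(String defaultCredential) { this.defaultCredential = defaultCredential; }

    void mapUser(String user, String legacyId) { perUser.put(user, legacyId); }
    void mapGroup(String group, String legacyId) { perGroup.put(group, legacyId); }

    // Most specific mapping wins: user, then group, then the shared default.
    String legacyCredentialFor(String user, String group) {
        if (perUser.containsKey(user)) return perUser.get(user);
        if (perGroup.containsKey(group)) return perGroup.get(group);
        return defaultCredential;
    }

    public static void main(String[] args) {
        CredentialMapper mapper = new CredentialMapper("SHARED_SVC");
        mapper.mapGroup("underwriters", "UW_SVC");
        mapper.mapUser("alice", "ALICE01");
        System.out.println(mapper.legacyCredentialFor("alice", "underwriters"));
        System.out.println(mapper.legacyCredentialFor("bob", "underwriters"));
        System.out.println(mapper.legacyCredentialFor("carol", "claims"));
    }
}
```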
Content Based Routing/Transformation/Enrichment
When connectors pull or push a message from/to an external system the message may need to be translated/transformed/enriched:
- A message is translated when it is converted between character sets/languages. Examples include ASCII/UTF to EBCDIC, or US-ASCII to UTF-16.
- A message is transformed when its structure is altered. We may rearrange or pad the message, or convert it to an XML document that the external system understands.
- A message is enriched when its content is enhanced using business logic. We may extract pieces of the message and process them using logic. Examples would be changing a product ID to the one used by a remote system, converting between currencies, or adding a field containing the taxable amount.
We may examine the content to decide what types of transformations are necessary. We may even send the message to a different destination depending on the content. We can, upon receiving a message, examine it to determine what the next stage of the processing pipeline is.
These types of operations are necessary whenever a message enters or leaves our services broker. They allow the service brokers to become more decoupled from the agents and external systems that they are connected to.
XML is the expected standard message format moving forward, but the system should also be able to accept packed messages. Encryption/decryption and compression should be treated as transformations to be applied.
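The translate/transform/enrich distinction can be sketched as three composable steps. The record layout, XML shape, and tax rate below are illustrative assumptions.

```java
import java.nio.charset.StandardCharsets;

// Sketch of the three message operations: translation (re-encoding),
// transformation (restructuring), and enrichment (derived content).
public class MessagePipeline {

    // Translation: re-encode the payload bytes, e.g. UTF-8 to UTF-16.
    static byte[] translate(byte[] payload) {
        String text = new String(payload, StandardCharsets.UTF_8);
        return text.getBytes(StandardCharsets.UTF_16);
    }

    // Transformation: restructure a flat "id,amount" record into the
    // XML shape the external system understands.
    static String transform(String flatRecord) {
        String[] parts = flatRecord.split(",");
        return "<order><id>" + parts[0] + "</id><amount>" + parts[1] + "</amount></order>";
    }

    // Enrichment: add a derived field (taxable amount) using business logic.
    static String enrich(String xml, double taxRate) {
        double amount = Double.parseDouble(xml.replaceAll(".*<amount>(.*)</amount>.*", "$1"));
        double tax = Math.round(amount * taxRate * 100) / 100.0;
        return xml.replace("</order>", "<tax>" + tax + "</tax></order>");
    }

    public static void main(String[] args) {
        String xml = transform("ORD-9,100.0");
        System.out.println(enrich(xml, 0.21));
    }
}
```

A real broker would drive such steps from declarative mappings (XSLT, for instance) rather than string handling, but the pipeline shape is the same.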
If digital signatures are used, or the message is encrypted (and therefore opaque), then these operations cannot be performed without either compromising the signature, or allowing these applications access to keys to perform decryption.
- In our example, the WSDL interface for insurers will evolve over time. We need to accommodate multiple versions depending on which insurer we connect to. This layer should isolate the rest of the system from these variant connectors and allow us to upgrade an insurer from WSDL/Email to WSDL/HTTP without impacting the rest of the system.
Security Mapping
Security is extremely important here. Applications making calls to service connectors need to be authenticated and should only be able to interact with the service connectors that they are authorized for. Activities and their associated agents also need mutual authentication between the BPM and the agent. Human activities also need an authentication/authorization mechanism. All these security mechanisms need to be integrated with existing directory and authorization services. LDAP directories are typical in corporations for authentication and limited authorization, but there is also a need for authorization servers that allow roles to be encoded in a single place. These authorization servers can then be leveraged by different middleware/server products to avoid duplicating authorization logic. IBM's Policy Director product is an example of such an authorization server and is currently being integrated into most IBM middleware products.
The complexity of security integration shouldn't be underestimated. Getting it wrong and being hacked can be quite costly. As companies expose their internal applications via service brokers to the outside world, this becomes an absolutely critical piece of functionality that must be addressed and made as easy to use as possible.
When two systems are involved that both implement security, and we need to authenticate a third party that needs access to both systems, then we usually have a problem. The systems are probably using their own independent credential databases. We may be using a third credential database to authenticate the third party. We need a way to allow the third party access to these two systems. Some solutions could be:
- Use a single trusted ID for all third parties to access each system. This works but has problems. Among them is that everybody has the same rights in the system, and this may not be acceptable.
- Have multiple trusted IDs based on role. This allows us to give third-party people access to a system using an ID that is only compatible with their role. However, even with this, we still need to add auditing so that we can track who did what, as IDs are shared between users.
The last scheme can be generalized as a credential vault. This is basically a secure database of credentials. When a user authenticates with system A and needs access to system B, we look up the credentials for B using a compound key consisting of the user's identity and the target system.
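A credential vault lookup can be sketched as a map keyed by (user, target system). Encryption at rest is elided here; a real vault would never hold plaintext credentials, and the names are illustrative assumptions.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Sketch of a credential vault: a store looked up by the compound key
// (authenticated user, target system).
public class CredentialVault {

    private final Map<String, String> vault = new HashMap<>();

    private static String key(String user, String targetSystem) {
        return user + "@" + targetSystem;
    }

    void store(String user, String targetSystem, String credential) {
        vault.put(key(user, targetSystem), credential);
    }

    Optional<String> lookup(String user, String targetSystem) {
        return Optional.ofNullable(vault.get(key(user, targetSystem)));
    }

    public static void main(String[] args) {
        CredentialVault v = new CredentialVault();
        v.store("alice", "systemB", "s3cret");
        System.out.println(v.lookup("alice", "systemB").orElse("none"));
        System.out.println(v.lookup("alice", "systemC").orElse("none"));
    }
}
```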
J2EE servers use a simple role as their access control mechanism. If a user has a role then the user can invoke a method/servlet. This is much too simple for a lot of applications. These applications may need to check a database and/or use logic to see what the user can and can't do. An example would be if the user's account is overdrawn then other security rules apply.
When we have multiple systems (not all J2EE) that must implement these rules, we can have problems. System A implements the rules a different way than B and we now have a potential security breach. This authorization logic needs to be abstracted away from the application and made accessible to any applications that need it.
A good implementation base for such a server would be a J2EE server. Beans can easily call code in the server when it is built on the same J2EE server. The authorization server can then access any databases/session beans needed to determine whether the specified user has a specific right.
Such a server would probably look like a rules engine that can also be extended with business logic. It may also be a good idea for this server to be co-locatable with an application server for performance reasons. It should also perform well, as you can expect it to carry a high load, and it will usually sit in the use case's critical path.
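Such a rules-engine style check can be sketched as a set of pluggable predicates that all must pass, with the overdrawn-account rule above as the example. The rule wiring and type names are assumptions for illustration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiPredicate;

// Sketch of an authorization server as an extensible rules engine:
// business-logic rules beyond simple role checks.
public class AuthorizationServer {

    record Account(String owner, double balance) {}

    private final List<BiPredicate<String, Account>> rules = new ArrayList<>();

    void addRule(BiPredicate<String, Account> rule) { rules.add(rule); }

    // Every registered rule must pass for the action to be permitted.
    boolean isAllowed(String action, Account account) {
        return rules.stream().allMatch(r -> r.test(action, account));
    }

    public static void main(String[] args) {
        AuthorizationServer authz = new AuthorizationServer();
        // Business rule: withdrawals are refused when the account is overdrawn.
        authz.addRule((action, acct) ->
                !(action.equals("withdraw") && acct.balance() < 0));
        System.out.println(authz.isAllowed("withdraw", new Account("bob", -50)));
        System.out.println(authz.isAllowed("deposit", new Account("bob", -50)));
    }
}
```

Centralizing such rules in one server is what lets systems A and B apply them identically instead of each re-implementing them.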
Container Authorization Provider Contract JSR in progress
There is currently a JSR active to specify a contract for integrating a third-party authorization server with a J2EE container. This is a good thing, but as already noted, this coarse-grained approach isn't always sufficient, and a further API/JSR may be needed for querying fine-grained authorization checks as well. You can view JSR 115 at http://www.j2ee.com/aboutJava/communityprocess/jsr/jsr_115_authorization.html
Connector Discovery
All this is no good if the connectors that comprise your service cannot be located. The connector discovery component allows service connectors to advertise themselves so that clients of the service can locate them. The best-known standard for this currently is UDDI. Each connector in a service should register with the connector discovery component. It should register its name, its type, and a set of interface/protocol combinations. The interface and protocol will be defined using WSDL.
UDDI, what is it?
UDDI is basically three different searchable views of this connector database. UDDI implements searches against these views using an API that is currently implemented over SOAP/HTTP.
- White Pages. Here, connectors can be found using their name. This is like finding your favorite plumber in the phone book: you know the plumber's name, so you look him up by name.
- Yellow Pages. Here we just know that we need a plumber, so we get the yellow pages and get a list of plumbers in the plumber section.
- Green Pages. But we don't know whether we're going to like those plumbers and be able to work with them; maybe they're a Spanish-speaking company and we speak French. The Green Pages solve this problem: they let us find connectors that speak our language. The interface that a connector supports is the search key in the green pages. We can find the connectors that support the French plumber interface. We can also search using multiple interfaces; for example, find me the connectors that support the French plumber interface AND the VISA payment interface. A more concrete example might be that we're looking for a SOAP/MQ Series connector that supports an XML message format using a specific DTD.
Companies, in practice, would look for connectors that implement interfaces that their software applications support. There is no point in finding an online bookstore if the interface that the bookstore supports is incompatible with the one our procurement software understands. The preceding example merely highlights the need to standardize interfaces in vertical domains. Otherwise, we would still need custom programming to plug in a new partner, and this would defeat our purpose.
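The three views can be mimicked with an in-memory registry, as a sketch of what the searches do. This is not the UDDI API (whose real transport is SOAP/HTTP); all names are illustrative.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
import java.util.Set;

// Sketch of white/yellow/green pages searches over registered connectors.
public class ConnectorRegistry {

    record Connector(String name, String type, Set<String> interfaces) {}

    private final List<Connector> connectors = new ArrayList<>();

    void register(Connector c) { connectors.add(c); }

    // White pages: lookup by name.
    Optional<Connector> byName(String name) {
        return connectors.stream().filter(c -> c.name().equals(name)).findFirst();
    }

    // Yellow pages: lookup by type.
    List<Connector> byType(String type) {
        return connectors.stream().filter(c -> c.type().equals(type)).toList();
    }

    // Green pages: lookup by the full set of interfaces required.
    List<Connector> byInterfaces(Set<String> required) {
        return connectors.stream()
                .filter(c -> c.interfaces().containsAll(required))
                .toList();
    }

    public static void main(String[] args) {
        ConnectorRegistry reg = new ConnectorRegistry();
        reg.register(new Connector("AcmePlumbing", "plumber",
                Set.of("FrenchPlumber", "VisaPayment")));
        reg.register(new Connector("RapidoFontaneria", "plumber",
                Set.of("SpanishPlumber")));
        System.out.println(reg.byInterfaces(Set.of("FrenchPlumber", "VisaPayment")).size());
    }
}
```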
An existing mature technology that would also work equally well is Corba. Corba has several services that can implement the equivalent of UDDI.
- CosTrading. This provides the yellow pages. It allows queries to be run over a set of Corba servers and the resulting subset of servers returned to the client. An SQL-like syntax is used for the query.
- CosNaming. This provides the white pages. It allows a server to be located using its name.
What Corba doesn't have (to my knowledge) is the equivalent of green pages. It does have the Interface Repository, but you can't search it for an interface and then find the connectors that implement that interface. The other problem with the Corba approach is that it only works with IIOP, whereas UDDI is protocol independent (we can even store telephone numbers in it).
UDDI and J2EE
So, UDDI improves on this by consolidating the interfaces with the yellow/white pages and by allowing searches by all three aspects when trying to locate a connector. As indicated earlier, UDDI will replace the vendor specific JNDI directories that are used to find EJBs in most J2EE servers today or we can expect J2EE vendors to add UDDI facilities to their JNDI (White Pages equivalent) implementation.
I think that J2EE servers will become more open and will let you provide a JNDI implementation for the J2EE server to use. It should be possible to attach a JNDI provider to the white pages portion of a UDDI server.
UDDI and service deployment
We can expect to see corporate UDDI servers for hosting information on connectors available on the internal network and we can also expect to have the global public UDDI directories. These corporate UDDI servers may be built on top of LDAP directory servers; UDDI is just a set of interfaces that get layered on top of the LDAP infrastructure and are probably implemented using a J2EE server. This is a good thing if it is the case, as most companies will already have an LDAP/J2EE infrastructure in production or in the plans.
For each connector defined by a service registered with the service broker, we should be able to specify a list of UDDI servers (internal and public) with which to register the connector. There will be automatically generated connectors, such as the ones that administer a service (start/stop/cancel a service). These connectors are also registered with the UDDI server.
Conclusion
Service Brokers bring a lot to the table. The main benefit is managing complexity and enabling a component-type approach to developing large systems. The complex integration, security, and middleware problems are removed from the hands of application developers and solved by the service broker. Applications don't need to know where they fit in the overall business process; this knowledge is abstracted and encoded by the service broker. This allows business processes to change without affecting the underlying applications. It should also allow less technical people to make such changes. Cross-application security is also abstracted out. This is a good thing, as the large suite of applications/technology platforms in large companies means that extremely few (and expensive) people will be experts in all of them. The service broker makes even tackling such an exercise much less risky, less expensive, and more reliable than in the past.
J2EE has a big opportunity here as it can provide the base on which these components can be built. However, the J2EE specification must be enhanced to really solve all the issues in building such components and indeed, several JSRs are already in the works towards helping us in this regard.
Web Services and B2B type products are starting to converge towards a 'Services Broker' type concept. Companies that want to be an eBusiness must consolidate their legacy and new applications together into a single eApplication. The services broker must be able to scale to handle such a complex application and make this formidable integration task viable. It needs to allow a diverse set of technologies to interoperate with it, to provide high level modeling tools, and to provide a workable security solution. You can be sure that companies such as IBM, BEA, SilverStream, WebMethods, Sybase, and Iona, as well as newcomers such as Invertica are looking at developing Service Brokers as a way to provide a 'complete' solution to make this eBusiness concept a reality.
Glossary
|API||Application Programming Interface.|
|ASCII||The traditional 7-bit standard for encoding the basic Latin character set as bytes.|
|B2B||Business to Business. A term used to describe directly connecting a company's applications to its partners'.|
|BPQL||Business Process Query Language. A query language for finding instances of business processes that match a certain specification.|
|BPM||Business Process Manager.|
|BPML||Business Process Management Language. A language for describing business processes.|
|CICS||The most widely used transaction-processing software in the world. If you have a bank account or a credit card, it is probably a CICS system running it. Built by IBM. Interestingly, you rent it from IBM: you pay a variable price per month depending on how many MIPS your CICS system consumes.|
|Corba||Common Object Request Broker Architecture. A widely used, language-independent object RPC mechanism on top of which many services were built. IIOP is one of the main legacies of Corba.|
|CVS||An open source version control application. It is used by groups of programmers to control versions of, and track modifications to, a set of files.|
|EBCDIC||The character set encoding for IBM mainframes.|
|EIS||Enterprise Information System. A complex application that implements an information system for a company. Such systems control human resources, payroll, inventory, accounts, ledgers, etc.|
|EJB||Enterprise Java Bean. A Java bean that runs inside a J2EE container, which enhances it so that it is remotely accessible and gives it some additional runtime services.|
|HTTP||Hypertext Transfer Protocol. The high-level protocol that lets your browser fetch pages and now underlies technologies such as SOAP.|
|IBM MQSeries||IBM's transactional message transport. It runs on 35 different platforms and has bindings for almost any language. It has around 60-70% of this market segment.|
|IIOP||Internet Inter-ORB Protocol. The TCP/IP protocol used by Corba and J2EE servers for sending requests over the wire.|
|J2EE||Java 2 Enterprise Edition. The set of standards for building enterprise Java applications.|
|JCA||Java Connector Architecture. A set of standard interfaces for accessing legacy systems such as SAP, PeopleSoft, CICS, or IMS.|
|JMS||Java Message Service. A standard set of interfaces that message transport providers can implement. It defines a programming standard for accessing messaging that is independent of the actual transport in use.|
|JSR||Java Specification Request. When a need for a new feature for the Java platform is identified, a JSR is created and approved, and it then provides a specification for the new feature.|
|LDAP||Lightweight Directory Access Protocol. A wire protocol for accessing a directory. Directories are typically used by companies to hold information on the services they offer, employee lists for authentication, etc. They are also now used to find message queues, databases, and J2EE services.|
|MessageQ||Digital Equipment Corporation's messaging product. It was bought by BEA and is now called Tuxedo MessageQ.|
|RMI/IIOP||The RMI layer on top of IIOP. This is the standard J2EE wire protocol for calling EJBs over a network.|
|SOAP||Simple Object Access Protocol. An RPC mechanism that uses XML as its marshalling format and is commonly used over HTTP. It is one of the transports for Web Services.|
|SQL||Structured Query Language. A text query language for searching relational databases.|
|UDDI||Universal Description, Discovery and Integration. A new directory standard for enabling Web Services applications. http://www.uddi.org|
|UTF||Unicode Transformation Format. A multi-language standard for encoding character sets using 8-bit or 16-bit units. It is built into most software at this point.|
|WSDL||Web Services Description Language. This lets us define an interface that is implemented using a specific middleware.|
|WSFL||Web Services Flow Language. An XML document format that describes a business process.|
|XML||Extensible Markup Language. A markup language that can be used to encode structured information in a human-readable form.|