pengines.pl -- Pengines: Web Logic Programming Made Easy

The library(pengines) provides an infrastructure for creating Prolog engines in a (remote) pengine server and accessing these engines either from Prolog or JavaScript.

author
- Torbjörn Lager and Jan Wielemaker
Source pengine_create(:Options) is det
Creates a new pengine. Valid options are:
id(-ID)
ID gets instantiated to the id of the created pengine. ID is atomic.
alias(+Name)
The pengine is named Name (an atom). A slave pengine (child) can subsequently be referred to by this name.
application(+Application)
Application in which the pengine runs. See pengine_application/1.
server(+URL)
The pengine will run in (and in the Prolog context of) the pengine server located at URL.
src_list(+List_of_clauses)
Inject a list of Prolog clauses into the pengine.
src_text(+Atom_or_string)
Inject the clauses specified by a source text into the pengine.
src_url(+URL)
Inject the clauses specified in the file located at URL into the pengine.
src_predicates(+List)
Send the local predicates denoted by List to the remote pengine. List is a list of predicate indicators.

Remaining options are passed to http_open/3 (meaningful only for non-local pengines) and thread_create/3. Note that for thread_create/3 only options changing the stack sizes can be used. In particular, do not pass the detached or alias options.

Successful creation of a pengine will return an event term of the following form:

create(ID, Term)
ID is the id of the pengine that was created. Term is not used at the moment.

An error will be returned if the pengine could not be created:

error(ID, Term)
ID is invalid, since no pengine was created. Term is the exception's error term.
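
As an illustration, the sketch below creates a local pengine preloaded with a few clauses and waits for its create event. The predicate create_demo/0 and the injected clauses are illustrative, not part of the library.

:- use_module(library(pengines)).

% Create a local pengine and wait for the create/2 event reporting its id.
create_demo :-
    pengine_create([ id(Id),
                     src_text("p(a). p(b). p(c).")
                   ]),
    pengine_event(create(Id, _Data)),
    format("Created pengine ~q~n", [Id]).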
Source translate_local_sources(+OptionsIn, -Options, +Module) is det[private]
Translate the src_predicates and src_list options into src_text. We need to do that anyway for remote pengines. For local pengines, we could avoid this step, but there is very little point in transferring source to a local pengine anyway as local pengines can access any Prolog predicate that you make visible to the application.

Multiple sources are concatenated to end up with a single src_text option.

Source pengine_send(+NameOrID, +Term) is det[private]
Same as pengine_send(NameOrID, Term, []).
Source pengine_send(+NameOrID, +Term, +Options) is det[private]
Succeeds immediately and places Term in the queue of the pengine NameOrID. Options is a list of options:
delay(+Time)
The actual sending is delayed by Time seconds. Time is an integer or a float.

Any remaining options are passed to http_open/3.

Source pengine_request(-Request) is det[private]
To be used by a pengine to wait for the next request. Such messages are placed in the queue by pengine_send/2. Keeps the thread in normal state if an event arrives within a second. Otherwise it waits for the idle_limit setting while using thread_idle/2 to minimise resources.
Source pengine_reply(+Event) is det[private]
Source pengine_reply(+Queue, +Event) is det[private]
Reply Event to the parent of the current Pengine or the given Queue. Such events are read by the other side with pengine_event/1.

If the message cannot be sent within the idle_limit setting of the pengine, abort the pengine.

Source pengine_ask(+NameOrID, @Query, +Options) is det
Asks pengine NameOrID a query Query.

Options is a list of options:

template(+Template)
Template is a variable (or a term containing variables) shared with the query. By default, the template is identical to the query.
chunk(+IntegerOrFalse)
Retrieve solutions in chunks of Integer rather than one by one. 1 means no chunking (default). Other integers indicate the maximum number of solutions to retrieve in one chunk. If false, the Pengine goal is not executed using findall/3 and friends and we do not backtrack immediately over the goal. As a result, changes to backtrackable global state are retained. This is similar to using set_prolog_flag(toplevel_mode, recursive).
bindings(+Bindings)
Sets the global variable '$variable_names' to a list of Name = Var terms, providing access to the actual variable names.

Any remaining options are passed to pengine_send/3.

Note that the predicate pengine_ask/3 is deterministic, even for queries that have more than one solution. Also, the variables in Query will not be bound. Instead, results will be returned in the form of event terms.

success(ID, Terms, Projection, Time, More)
ID is the id of the pengine that succeeded in solving the query. Terms is a list holding instantiations of Template. Projection is a list of variable names that should be displayed. Time is the CPU time used to produce the results and, finally, More is either true or false, indicating whether we can expect the pengine to be able to return more solutions or not, were we to call pengine_next/2.
failure(ID)
ID is the id of the pengine that failed for lack of a solution.
error(ID, Term)
ID is the id of the pengine throwing the exception. Term is the exception's error term.
output(ID, Term)
ID is the id of a pengine running the query that called pengine_output/1. Term is the term that was passed in the first argument of pengine_output/1 when it was called.
prompt(ID, Term)
ID is the id of the pengine that called pengine_input/2 and Term is the prompt.

Defined in terms of pengine_send/3, like so:

pengine_ask(ID, Query, Options) :-
    partition(pengine_ask_option, Options, AskOptions, SendOptions),
    pengine_send(ID, ask(Query, AskOptions), SendOptions).
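
For example, a query can be posted and its first answer received as an event like this (a sketch; ask_demo/1 and the injected clauses are illustrative):

ask_demo(Answer) :-
    pengine_create([ id(Id),
                     src_text("p(a). p(b). p(c).")
                   ]),
    pengine_event(create(Id, _)),
    pengine_ask(Id, p(X), [template(X)]),
    % With the default chunk size of 1, Answer should unify with a term
    % of the form success(Id, [a], _Projection, _Time, true).
    pengine_event(Answer).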
Source pengine_next(+NameOrID, +Options) is det
Asks pengine NameOrID for the next solution to a query started by pengine_ask/3. Defined options are:
chunk(+Count)
Modify the chunk-size to Count before asking the next set of solutions. This may not be used if the goal was started with chunk(false).

Remaining options are passed to pengine_send/3. The result of re-executing the current goal is returned to the caller's message queue in the form of event terms.

success(ID, Terms, Projection, Time, More)
See pengine_ask/3.
failure(ID)
ID is the id of the pengine that failed for lack of more solutions.
error(ID, Term)
ID is the id of the pengine throwing the exception. Term is the exception's error term.
output(ID, Term)
ID is the id of a pengine running the query that called pengine_output/1. Term is the term that was passed in the first argument of pengine_output/1 when it was called.
prompt(ID, Term)
ID is the id of the pengine that called pengine_input/2 and Term is the prompt.

Defined in terms of pengine_send/3, as follows:

pengine_next(ID, Options) :-
    pengine_send(ID, next, Options).
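
Building on that, the sketch below keeps calling pengine_next/2 until all solutions have been received. The helper collect_answers/2 is illustrative and assumes pengine_ask/3 has already been called on the pengine.

collect_answers(Id, All) :-
    pengine_event(Event),
    (   Event = success(Id, Terms, _Projection, _Time, true)
    ->  pengine_next(Id, []),
        collect_answers(Id, More),
        append(Terms, More, All)
    ;   Event = success(Id, Terms, _Projection, _Time, false)
    ->  All = Terms
    ;   Event = failure(Id)
    ->  All = []
    ).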
Source pengine_stop(+NameOrID, +Options) is det
Tells pengine NameOrID to stop looking for more solutions to a query started by pengine_ask/3. Options are passed to pengine_send/3.

Defined in terms of pengine_send/3, like so:

pengine_stop(ID, Options) :-
    pengine_send(ID, stop, Options).
Source pengine_abort(+NameOrID) is det
Aborts the running query. The pengine goes back to state `2', waiting for new queries.
See also
- pengine_destroy/1.
Source pengine_destroy(+NameOrID) is det
Source pengine_destroy(+NameOrID, +Options) is det
Destroys the pengine NameOrID. With the option force(true), the pengine is killed using abort/0 and pengine_destroy/2 succeeds.
 current_pengine(?Id, ?Parent, ?Location)[private]
Dynamic predicate that registers our known pengines. Id is an atomic unique datatype. Parent is the id of our parent pengine. Location is either the local thread of the pengine or the URL of the remote server on which it runs.
Source pengine_register_local(+Id, +Thread, +Queue, +URL, +App, +Destroy) is det[private]
 pengine_register_remote(+Id, +URL, +Queue, +App, +Destroy) is det[private]
Source pengine_unregister(+Id) is det[private]
Called on pengine thread destruction. If we are a remote pengine thread, our URL equals http and the queue is the message queue used to send events to the HTTP workers.
Source pengine_self(-Id) is det
True if the current thread is a pengine with Id.
Source protect_pengine(+Id, :Goal) is semidet[private]
Run Goal while protecting the Pengine Id from being destroyed. Used by the HTTP I/O routines to prevent the Pengine's module from disappearing while I/O is in progress. We use a pool of locks because the lock may be held relatively long by output routines.

This also runs Goal if the Pengine no longer exists. This deals with Pengines terminated through destroy_or_continue/1.

bug
- After destroy_or_continue/1 takes the destroy route, the module may drop out at any point in time, resulting in a possible crash. It seems the only safe way out is to do (de)serialization inside the Pengine.
Source pengine_application(+Application) is det
Directive that must be used to declare a pengine application module. The module must not be associated with any file. The default application is pengine_sandbox. The example below creates a new application address_book and imports the API defined in the module file address_book_api.pl into the application.
:- pengine_application(address_book).
:- use_module(address_book:address_book_api).
Source current_pengine_application(?Application) is nondet
True when Application is a currently defined application.
See also
- pengine_application/1
Source pengine_property(?Pengine, ?Property) is nondet
True when Property is a property of the given Pengine. Enumerates all pengines that are known to the calling Prolog process. Defined properties are:
self(ID)
Identifier of the pengine. This is the same as the first argument, and can be used to enumerate all known pengines.
alias(Name)
Name is the alias name of the pengine, as provided through the alias option when creating the pengine.
thread(Thread)
If the pengine is a local pengine, Thread is the Prolog thread identifier of the pengine.
remote(Server)
If the pengine is remote, the URL of the server.
application(Application)
Pengine runs the given application.
module(Module)
Temporary module used for running the Pengine.
destroy(Destroy)
Destroy is true if the pengine is destroyed automatically after completing the query.
parent(Queue)
Message queue to which the (local) pengine reports.
source(?SourceID, ?Source)
Source is the source code with the given SourceID. Only available if the setting debug_info is enabled.
detached(?Time)
Pengine was detached at Time.
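
For instance, a small sketch that prints where each known pengine runs (list_pengines/0 is illustrative):

list_pengines :-
    forall(pengine_property(Id, application(App)),
           (   (   pengine_property(Id, thread(Thread))
               ->  Where = thread(Thread)
               ;   pengine_property(Id, remote(Server))
               ->  Where = remote(Server)
               ;   Where = unknown
               ),
               format("~q: application ~q, ~q~n", [Id, App, Where])
           )).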
Source pengine_output(+Term) is det
Sends Term to the parent pengine or thread.
Source pengine_debug(+Format, +Args) is det
Create a message using format/3 from Format and Args and send this to the client. The default JavaScript client will call console.log(Message) if there is a console. The predicate pengine_rpc/3 calls debug(pengine(debug), '~w', [Message]). The debug topic pengine(debug) is enabled by default.
See also
- debug/1 and nodebug/1 for controlling the pengine(debug) topic
- format/2 for format specifications
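
For instance, code loaded into a pengine (e.g., via the src_text option of pengine_create/1) could report its progress as follows; pick/1 is purely illustrative:

% Emit a debug message for every candidate tried.
pick(X) :-
    member(X, [a, b, c]),
    pengine_debug("trying ~q", [X]).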
Source local_pengine_create(+Options)[private]
Creates a local Pengine, which is a thread running pengine_main/2. It maintains two predicates:
Source thread_pool:create_pool(+Application) is det[multifile]
On demand creation of a thread pool for a pengine application.
Source create(+Queue, -Child, +Options, +URL, +Application) is det[private]
Create a new pengine thread.
Arguments:
Queue - is the queue (or thread handle) to report to
Child - is the identifier of the created pengine.
URL - is one of local or http
Source pengine_done is det
Called from the at_exit option of the pengine thread. Destroys child pengines using pengine_destroy/1. Cleaning up the Pengine is synchronised by the pengine_done mutex. See read_event/6.
Source pengine_main(+Parent, +Options, +Application)[private]
Run a pengine main loop. First acknowledges its creation and runs pengine_main_loop/1.
 ask_to_term(+AskSpec, +Module, -Options, OptionsTail) is det[private]
Translate the AskSpec into a query, template and bindings. The trick is that we must parse using the operator declarations of the source and we must make sure that variable sharing between the query and the answer template is preserved.
Source fix_streams is det[private]
If we are a pengine that is created from a web server thread, the current output points to a CGI stream.
Source pengine_prepare_source(:Application, +Options) is det[private]
Load the source into the pengine's module.
throws
- prepare_source_failed if it failed to prepare the sources.
Source prepare_module(+Module, +Application, +Options) is semidet[multifile]
Hook, called to initialize the temporary private module that provides the working context of a pengine. This hook is executed by the pengine's thread. Preparing the source consists of three steps:
  1. Add Application as (first) default import module for Module
  2. Call this hook
  3. Compile the source provided by the src_text and src_url options
Arguments:
Module - is a new temporary module (see in_temporary_module/3) that may be (further) prepared by this hook.
Application - (also a module) associated with the pengine.
Options - is passed from the environment and should (currently) be ignored.
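
A minimal hook definition might look as follows; the application name my_app and the seeded fact are illustrative assumptions:

:- multifile pengines:prepare_module/3.

% Seed every temporary module of the (hypothetical) application my_app
% with a fact before the src_text/src_url sources are compiled.
pengines:prepare_module(Module, my_app, _Options) :-
    assertz(Module:session_started(true)).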
Source guarded_main_loop(+Pengine) is det[private]
Executes state `2' of the pengine, where it waits for two events:
destroy
Terminate the pengine
ask(:Goal, +Options)
Solve Goal.
Source solve(+Chunk, +Template, :Goal, +ID) is det[private]
Solve Goal. Note that because we can ask for a new goal in state `6', we must provide for an ancestral cut (prolog_cut_to/1). We need to be sure to have a choice point before we can call prolog_current_choice/1. This is the reason why this predicate has two clauses.
Source set_projection(:Goal, +Bindings)[private]
findnsols_no_empty/4 copies its goal and template to avoid instantiation thereof when it stops after finding N solutions. Using this helper we obtain a renamed version of Bindings that we can set.
Source filter_template(+Template0, +Bindings, -Template) is det[private]
Establish the final template. This is there because hooks such as goal_expansion/2 and the SWISH query hooks can modify the set of bindings.
bug
- Projection and template handling is pretty messy.
Source more_solutions(+Pengine, +Choice, +State, +Time)[private]
Called after a solution was found while there can be more. This is state `6' of the state machine. It processes these events:
stop
Go back via state `7' to state `2' (guarded_main_loop/1)
next
Fail. This causes solve/4 to backtrack on the goal asked, providing at most the current chunk of solutions.
next(Count)
As next, but sets the new chunk-size to Count.
ask(Goal, Options)
Ask another goal. Note that we must commit the choice point of the previous goal asked for.
Source ask(+Pengine, :Goal, +Options)[private]
Migrate from state `2' to `3'. This predicate validates that it is safe to call Goal using safe_goal/1 and then calls solve/4 to prove the goal. It takes care of the chunk(N) option.
Source prepare_goal(+Pengine, +GoalIn, -GoalOut, +Options) is det[private]
Prepare GoalIn for execution in Pengine. This implies we must perform goal expansion and, if the system is sandboxed, check the sandbox.

Note that expand_goal(Module:GoalIn, GoalOut) is what we'd like to write, but this does not work correctly if the user wishes to expand X:Y while interpreting X not as the module in which to run Y. This happens in the CQL package. Possibly we should disallow this reinterpretation?

Source prepare_goal(+Goal0, -Goal1, +Options) is semidet[multifile]
Pre-preparation hook for running Goal0. The hook runs in the context of the pengine. Goal0 is the raw goal given to ask. The returned Goal1 is subject to goal expansion (expand_goal/2) and sandbox validation (safe_goal/1) prior to execution. If this hook fails, Goal0 is used for further processing.
Arguments:
Options - provides the options as given to ask
Source pengine_not_sandboxed(+Pengine) is semidet[private]
True when the pengine does not operate in sandboxed mode. This implies a user must be registered by authentication_hook/3 and that the hook pengines:not_sandboxed(User, Application) must succeed.
Source not_sandboxed(+User, +Application) is semidet[multifile]
This hook is called to see whether the Pengine must be executed in a protected environment. It is only called after authentication_hook/3 has confirmed the identity of the current user. If this hook succeeds, both loading the code and executing the query are performed without enforcing sandbox security. Typically, one should:
  1. Provide a safe user authentication hook.
  2. Enable HTTPS in the server or put it behind an HTTPS proxy and ensure that the network between the proxy and the pengine server can be trusted.
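
A sketch of such a hook, assuming an application my_app and an application-defined predicate trusted_user/1:

:- multifile pengines:not_sandboxed/2.

% Users accepted by trusted_user/1 (assumed to be defined by the
% application) run without sandbox restrictions in my_app.
pengines:not_sandboxed(User, my_app) :-
    trusted_user(User).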
Source pengine_pull_response(+Pengine, +Options) is det
Pulls a response (an event term) from the slave Pengine if Pengine is a remote process, else does nothing at all.
Source pengine_input(+Prompt, -Term) is det
Sends Prompt to the master (parent) pengine and waits for input. Note that Prompt may be any term, compound as well as atomic.
Source pengine_respond(+Pengine, +Input, +Options) is det
Sends a response in the form of the term Input to a slave (child) pengine that has prompted its master (parent) for input.

Defined in terms of pengine_send/3, as follows:

pengine_respond(Pengine, Input, Options) :-
    pengine_send(Pengine, input(Input), Options).
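
The prompt/respond handshake can be sketched as follows. The pengine-side predicate guess/0 and the master-side handle_prompt/1 are illustrative:

% Pengine-side code (e.g., injected via the src_text option):
guess :-
    pengine_input('Your guess?', Guess),
    (   Guess == 42
    ->  pengine_output(correct)
    ;   pengine_output(try_again)
    ).

% Master-side: wait for the prompt event and answer it.
handle_prompt(Id) :-
    pengine_event(prompt(Id, Prompt)),
    format("Pengine asks: ~w~n", [Prompt]),
    pengine_respond(Id, 42, []).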
Source send_error(+Error) is det[private]
Send an error to my parent. Remove non-readable blobs from the error term first using replace_blobs/2. If the error contains a stack-trace, this is resolved to a string before sending.
Source replace_blobs(Term0, Term) is det[private]
Copy Term0 to Term, replacing non-text blobs. This is required for error messages that may hold streams and other handles to non-readable objects.
Source remote_send_rec(+Server, +Action, +ID, +Params, -Reply, +Options)[private]
Issue a GET request on Server and unify Reply with the replied term.
Source probe(+Action, +URL) is det[private]
Probe the target. Doing so before posting a large document avoids being faced with an authentication challenge after the upload. Possibly we should make this an option for simpler scenarios.
Source pengine_event(?EventTerm) is det[private]
Source pengine_event(?EventTerm, +Options) is det[private]
Examines the pengine's event queue and, if necessary, blocks execution until a term that unifies with EventTerm arrives in the queue. After a term from the queue has been unified with EventTerm, the term is deleted from the queue.

Valid options are:

timeout(+Time)
Time is a float or integer and specifies the maximum time to wait in seconds. If no event has arrived before the time is up, EventTerm is bound to the atom timeout.
listen(+Id)
Only listen to events from the pengine identified by Id.
Source pengine_event_loop(:Closure, +Options) is det
Starts an event loop accepting event terms sent to the current pengine or thread. For each such event E, calls ignore(call(Closure, E)). A closure thus acts as a handler for the event. Some events are also treated specially:
create(ID, Term)
The ID is placed in a list of active pengines.
destroy(ID)
The ID is removed from the list of active pengines. When the last pengine ID is removed, the loop terminates.
output(ID, Term)
The predicate pengine_pull_response/2 is called.

Valid options are:

autoforward(+To)
Forwards received event terms to slaves. To is either all, all_but_sender or a Prolog list of NameOrIDs. [not yet implemented]
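
A typical loop, adapted from the canonical pengines usage pattern; handle/1 and the injected clauses are illustrative. As a precaution the handler accepts solution events both in the three-argument form used in the canonical examples and in the five-argument form documented for pengine_ask/3.

main :-
    pengine_create([ src_text("p(a). p(b). p(c).") ]),
    pengine_event_loop(handle, []).

handle(create(Id, _)) :-
    pengine_ask(Id, p(X), [template(X)]).
handle(success(Id, Terms, More)) :-
    show_and_continue(Id, Terms, More).
handle(success(Id, Terms, _Projection, _Time, More)) :-
    show_and_continue(Id, Terms, More).
handle(failure(_Id)) :-
    format("No (more) solutions~n").
handle(error(_Id, Error)) :-
    print_message(error, Error).

show_and_continue(Id, Terms, More) :-
    maplist(writeln, Terms),
    (   More == true
    ->  pengine_next(Id, [])
    ;   true
    ).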
Source pengine_rpc(+URL, +Query) is nondet
Source pengine_rpc(+URL, +Query, +Options) is nondet
Semantically equivalent to the sequence below, except that the query is executed in (and in the Prolog context of) the pengine server referred to by URL, rather than locally.
  copy_term_nat(Query, Copy),  % attributes are not copied to the server
  call(Copy),			 % executed on server at URL
  Query = Copy.

Valid options are:

chunk(+IntegerOrFalse)
Can be used to reduce the number of network roundtrips being made. See pengine_ask/3.
timeout(+Time)
Wait at most Time seconds for the next event from the server. The default is defined by the setting pengines:time_limit.

Remaining options (except the server option) are passed to pengine_create/1.
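
For example, all solutions of a goal can be collected from a remote server as sketched below; the server URL http://localhost:3030 and the queried goal are illustrative assumptions:

% Collect all remote solutions locally.  Provided the goal is accepted
% by the server's sandbox, Xs should become [a, b, c].
remote_members(Xs) :-
    findall(X,
            pengine_rpc('http://localhost:3030',
                        member(X, [a, b, c]),
                        [ chunk(10) ]),
            Xs).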

Source prompt(+ID, +Prompt, -Term) is semidet[multifile]
Hook to handle pengine_input/2 from the remote pengine. If the hook fails, pengine_rpc/3 calls read/1 using the current prompt.
Source output(+ID, +Term) is semidet[multifile]
Hook to handle pengine_output/1 from the remote pengine. If the hook fails, it calls print/1 on Term.
Source http_pengine_create(+Request)[private]
HTTP POST handler for /pengine/create. This API accepts the pengine creation parameters both as application/json and as www-form-encoded. Accepted parameters:
Parameter    | Default         | Comment
format       | prolog          | Output format
application  | pengine_sandbox | Pengine application
chunk        | 1               | Chunk-size for results
solutions    | chunked         | If all, emit all results
ask          | -               | The query
template     | -               | Output template
src_text     | ""              | Program
src_url      | -               | Program to download
disposition  | -               | Download location

Note that solutions=all internally uses chunking to obtain the results from the pengine, but the results are combined in a single HTTP reply. This is currently only implemented by the CSV backend that is part of SWISH for downloading unbounded result sets with limited memory resources.

Using chunk=false simulates the recursive toplevel. See pengine_ask/3.
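
As an illustration only, a Prolog client could exercise this handler roughly as follows, assuming a pengine server listening at http://localhost:3030 and using library(http/http_json) to encode the parameters as application/json:

:- use_module(library(http/http_client)).
:- use_module(library(http/http_json)).

% Create a pengine that immediately runs a query and replies in JSON.
% All parameter values below are illustrative.
create_and_ask(Reply) :-
    http_post('http://localhost:3030/pengine/create',
              json(_{ ask: "member(X,[a,b,c])",
                      template: "X",
                      format: "json"
                    }),
              Reply,
              []).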

Source http_pengine_create(+Request, +Application, +Format, +OptionsDict)[private]
Source wait_and_output_result(+Pengine, +Queue, +Format, +TimeLimit) is det[private]
Wait for the Pengine's Queue and, if there is a message, send it to the requester using output_result/2. If Pengine does not answer within the time specified by the setting time_limit, Pengine is aborted and the result is error(time_limit_exceeded, _).
Source create_wait_and_output_result(+Pengine, +Queue, +Format, +TimeLimit, +Dict) is det[private]
Intercepts the `solutions=all' case used for downloading results. Dict may contain a disposition key to denote the download location.
Source time_limit_exceeded(+Pengine, +Format)[private]
The Pengine did not reply within its time limit. Send a reply to the client in the requested format and interrupt the Pengine.
bug
- Ideally, if the Pengine has destroy set to false, we should get the Pengine back to its main loop. Unfortunately we only have normal exceptions, which may be caught by the Pengine, and abort, which cannot be caught and thus destroys the Pengine.
Source destroy_queue_from_http(+Pengine, +Event, +Queue) is semidet[private]
Consider destroying the output queue for Pengine after sending Event back to the HTTP client. We can destroy the queue if
To be done
- If the client did not request all output, the queue will not be destroyed. We need some timeout and GC for that.
Source gc_abandoned_queues[private]
Check whether there are queues that have been abandoned. This happens if the stream contains output events and not all of them are read by the client.
Source sync_destroy_queue_from_http(+Pengine, +Queue) is det[private]
 sync_delay_destroy_queue(+Pengine, +Queue) is det[private]
Handle destruction of the message queue connecting the HTTP side to the pengine. We cannot delete the queue when the pengine dies because the queue may contain output events. Termination of the pengine and finishing the HTTP exchange may happen in either order, so we need to handle this using synchronization.
sync_destroy_queue_from_pengine(+Pengine, +Queue)
Called (indirectly) from pengine_done/1 if the pengine's thread dies.
sync_destroy_queue_from_http(+Pengine, +Queue)
Called from destroy_queue/3, from wait_and_output_result/4, i.e., from the HTTP side.
Source sync_destroy_queue_from_pengine(+Pengine, +Queue)[private]
Called from pengine_unregister/1 when the pengine thread terminates. It is called while the mutex pengine is held.
Source read_event(+Pengine, +Request, +Format, +EventString, -Event) is det[private]
Read an event on behalf of Pengine. Note that the pengine's module should not be deleted while we are reading using its syntax (module). This is ensured using the pengine_done mutex.
See also
- pengine_done/0.
 read_event_(+Request, +EventString, +Module, -Event, -Bindings)[private]
Read the sent event. The event is a Prolog term that is passed either in the event parameter or as a posted document.
Source discard_post_data(+Request) is det[private]
If this is a POST request, discard the posted data.
Source fix_bindings(+Format, +EventIn, +Bindings, -Event) is det[private]
Generate the template for the json(-s) Format from the variables in the asked Goal. Variables starting with an underscore followed by a capital letter are excluded from the template.
Source json_lang(+Format) is semidet[private]
True if Format is a JSON variation.
Source http_pengine_pull_response(+Request)[private]
HTTP handler for /pengine/pull_response. Pulls possible pending messages from the pengine.
Source http_pengine_abort(+Request)[private]
HTTP handler for /pengine/abort. Note that abort may be sent at any time and the reply may be handled by a pull_response. In that case, our pengine has already died before we get to wait_and_output_result/4.
Source http_pengine_detach(+Request)[private]
Detach a Pengine while keeping it running. This has the following consequences:
Source http_pengine_destroy_all(+Request)[private]
Destroy a list of pengines. Normally called by pengines.js if the browser window is closed.
Source http_pengine_ping(+Request)[private]
HTTP handler for /pengine/ping. If the requested Pengine is alive, an event status(Pengine, Stats) is created, where Stats is the result of thread_statistics/2.
Source http_pengine_list(+Request)[private]
HTTP handler for `/pengine/list`, providing information about running Pengines.
To be done
- Only list detached Pengines associated to the logged in user.
Source output_result(+Format, +EventTerm) is det[private]
Source output_result(+Format, +EventTerm, +OptionsDict) is det[private]
Formulate an HTTP response from a pengine event term. Format is one of prolog, json or json-s.
Source portray_blob(+Blob, +Options) is det
Portray non-text blobs that may appear in output terms. Not really sure about that. Basically such terms need to be avoided as they are meaningless outside the process. The generated error is hard to debug though, so now we send them as '$BLOB'(Type). Future versions may include more info, depending on Type.
Source abort_pending_output(+Pengine) is det[private]
If we get an abort, it is possible that output is being produced for the client. This predicate aborts these threads.
Source write_result(+Lang, +Event, +Dict) is semidet[multifile]
Hook that allows for different output formats. The core Pengines library supports prolog and various JSON dialects. The hook event_to_json/3 can be used to refine the JSON dialects. This hook must be used if a completely different output format is desired.
Source disable_client_cache[private]
Make sure the client will not cache our page.
See also
- http://stackoverflow.com/questions/49547/making-sure-a-web-page-is-not-cached-across-all-browsers
Source add_error_details(+Error, +JSON0, -JSON)
Add format error code and location information to an error. Also used by pengines_io.pl.
Source add_error_code(+Error, +JSON0, -JSON) is det[private]
Add a code field to JSON0 if Error is an ISO error term. The error code is the functor name of the formal part of the error, e.g., syntax_error, type_error, etc. Some errors carry more information:
existence_error(Type, Obj)
{arg1:Type, arg2:Obj}, where Obj is stringified if it is not atomic.
Source add_error_location(+Error, +JSON0, -JSON) is det[private]
Add a location property if the error can be associated with a source location. The location is an object with properties file and line and, if available, the character location in the line.
Source event_to_json(+Event, -JSONTerm, +Lang) is semidet[multifile]
Hook that translates a Pengine event structure into a term suitable for reply_json/1, according to the language specification Lang. This can be used to massage general Prolog terms, notably associated with success(ID, Bindings, Projection, Time, More) and output(ID, Term) into a format suitable for processing at the client side.
Source allowed(+Request, +Application) is det[private]
Check whether the peer is allowed to connect. Returns a forbidden header if contact is not allowed.
Source authenticate(+Request, +Application, -UserOptions:list) is det[private]
Call authentication_hook/3, returning either [user(User)], [] or an exception.
Source authentication_hook(+Request, +Application, -User) is semidet[multifile]
This hook is called from the /pengine/create HTTP handler to discover whether the server is accessed by an authorized user. It can react in three ways: it can succeed, binding User to the authenticated user; it can fail, in which case the pengine is created without a user (and thus runs sandboxed); or it can throw an exception to deny access.
See also
- http_authenticate/3 can be used to implement this hook using default HTTP authentication data.
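
A sketch of such a hook using HTTP basic authentication; the password file name passwd is an assumption:

:- multifile pengines:authentication_hook/3.
:- use_module(library(http/http_authenticate)).

% Accept users listed in the (hypothetical) file passwd.  If no valid
% credentials are supplied the hook fails and access remains sandboxed.
pengines:authentication_hook(Request, _Application, User) :-
    http_authenticate(basic(passwd), Request, [User|_Fields]).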
Source pengine_user(-User) is semidet
True when the pengine was created by an HTTP request that authorized User.
See also
- authentication_hook/3 can be used to extract authorization from the HTTP header.
Source reply_options(+Request, +Methods) is semidet[private]
Reply to an HTTP OPTIONS request.
Source pengine_src_text(+SrcText, +Module) is det[private]
Asserts the clauses defined in SrcText in the private database of the current Pengine. This predicate processes the `src_text' option of pengine_create/1.
Source pengine_src_url(+URL, +Module) is det[private]
Asserts the clauses defined in URL in the private database of the current Pengine. This predicate processes the `src_url' option of pengine_create/1.
To be done
- Make a sensible guess at the encoding.

Re-exported predicates

The following predicates are exported from this file while their implementation is defined in imported modules or non-module files loaded by this module.

Source pengine_rpc(+URL, +Query) is nondet
Source pengine_rpc(+URL, +Query, +Options) is nondet
Semantically equivalent to the sequence below, except that the query is executed in (and in the Prolog context of) the pengine server referred to by URL, rather than locally.
  copy_term_nat(Query, Copy),  % attributes are not copied to the server
  call(Copy),			 % executed on server at URL
  Query = Copy.

Valid options are:

chunk(+IntegerOrFalse)
Can be used to reduce the number of network roundtrips being made. See pengine_ask/3.
timeout(+Time)
Wait at most Time seconds for the next event from the server. The default is defined by the setting pengines:time_limit.

Remaining options (except the server option) are passed to pengine_create/1.

Source pengine_destroy(+NameOrID) is det
Source pengine_destroy(+NameOrID, +Options) is det
Destroys the pengine NameOrID. With the option force(true), the pengine is killed using abort/0 and pengine_destroy/2 succeeds.
Source pengine_event(?EventTerm) is det
Source pengine_event(?EventTerm, +Options) is det
Examines the pengine's event queue and, if necessary, blocks execution until a term that unifies with EventTerm arrives in the queue. After a term from the queue has been unified with EventTerm, the term is deleted from the queue.

Valid options are:

timeout(+Time)
Time is a float or integer and specifies the maximum time to wait in seconds. If no event has arrived before the time is up, EventTerm is bound to the atom timeout.
listen(+Id)
Only listen to events from the pengine identified by Id.