A receiver could get a spurious empty tube status, due to
receive_tube() racing with send_tube(). See the comments added to the
code for details on the resolution.
While at it, guard against load/store tearing on shared pointers.
Pending issue: we still have a potential connectivity issue between
the prep and finish ops when pushing to a tube.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
Refrain from inlining core services, so that interposing on them via
dynamic linking tricks is made easier.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
evl_get_current() contains implementation details which are definitely
not part of the API and the way this works should not be exposed.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
Refrain from inlining core services, so that interposing on them via
dynamic linking tricks is made easier.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
What evl_read_clock() does is non-trivial, enough so to call for an
out-of-line implementation.
In general, refrain from inlining core services, so that interposing
on them via dynamic linking tricks is made easier.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
We have no more in-tree users of these. Besides, let's assume that the
CPU's branch predictor always has better clues than we might have when
assessing the likelihood of a condition.
Bonus: this clears a recurring source of namespace clashes with C++
frameworks like Boost.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
In user-space at least, we'd be better off trusting the CPU's branch
predictor, instead of relying on our limited perception of how likely
a condition is, or on every compiler doing the right thing with
respect to efficient branching.
We only have a couple of likely predictions in-tree on straightforward
conditions from the tube implementation code, which we can remove
safely.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
This mode measures the delay between the moment a synthetic interrupt
is posted from the oob stage and when it is eventually received by its
in-band handler. When measured under significant pressure, this gives
the typical interrupt latency experienced by the in-band kernel due to
local interrupt disabling.
Therefore, this is an in-band only test which measures IRQ latency
experienced by the common kernel infrastructure, _NOT_ by EVL.
Measurement is requested with the '-s' option.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
Since ABI 23, the core is able to channel T_WOSS, T_WOLI and T_WOSX
error notifications through the offender's observable component if
present.
Convert all SIGDEBUG_xxx cause codes to the new EVL_HMDIAG_xxx naming,
so that we have a single nomenclature for these errors regardless of
whether threads are notified via SIGDEBUG or their observable
component.
The API rev. is bumped to #17 as a result of these changes.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
Since ABI 23, the core provides the new observable element, which
enables the observer design pattern. Any EVL thread is in and of
itself an observable which can be monitored for events too.
As a by-product, the poll interface can now be given user-defined
opaque data when subscribing file descriptors to poll elements, which
the core passes back on return to evl_poll().
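The opaque-data round trip could look like the sketch below; the
evl_add_pollfd()/evl_poll() prototypes, the evl_ptrval() helper and
the pollval field are assumptions to be checked against the installed
<evl/poll.h>:

```c
#include <evl/poll.h>	/* assumed header providing the poll API */

struct channel {
	int fd;
	void (*handler)(struct channel *c);
};

/* Subscribe each channel, tagging it with a pointer to its own
 * descriptor; the core hands that tag back verbatim in the poll
 * events, so no fd-to-channel lookup is needed on wakeup. */
static int dispatch_events(int pollfd, struct channel *chans, int nchans)
{
	struct evl_poll_event pollset[16];
	int ret, i;

	for (i = 0; i < nchans; i++) {
		ret = evl_add_pollfd(pollfd, chans[i].fd, POLLIN,
				evl_ptrval(&chans[i]));
		if (ret)
			return ret;
	}

	ret = evl_poll(pollfd, pollset, 16);
	if (ret < 0)
		return ret;

	for (i = 0; i < ret; i++) {
		struct channel *c = pollset[i].pollval.ptr;
		c->handler(c);
	}

	return 0;
}
```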
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
Open-coding oob_ioctl() calls to set/clear mode bits in apps is
unwieldy and fairly ugly. Let's provide sanctioned services for these
requests, namely evl_set_thread_mode() and evl_clear_thread_mode()
respectively.
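A sketch of the intended usage; the prototypes, evl_get_self() and the
T_WOSS bit shown here are assumptions drawn from the commit text:

```c
#include <evl/thread.h>	/* assumed header for the thread services */

/* Enable "warn on stage switch" notifications around a critical
 * section, then restore the previous mode on the way out. */
static int with_stage_switch_warnings(void (*critical)(void))
{
	int oldmask, ret;

	ret = evl_set_thread_mode(evl_get_self(), T_WOSS, &oldmask);
	if (ret)
		return ret;

	critical();

	/* Clear the bit only if we actually turned it on. */
	if (!(oldmask & T_WOSS))
		ret = evl_clear_thread_mode(evl_get_self(), T_WOSS, NULL);

	return ret;
}
```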
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
Starting with ABI 22, we can ask the core to unblock a thread from
a wait state, which may include forcing it out of any real-time
scheduling class by demoting it to SCHED_WEAK in the same move.
Export evl_unblock_thread() and evl_demote_thread() as the
corresponding wrappers.
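A minimal sketch of the two wrappers in use, assuming both take the
target thread's file descriptor:

```c
#include <stdbool.h>
#include <evl/thread.h>	/* assumed header for the thread services */

/* Force a worker out of an EVL wait state; optionally demote it to
 * SCHED_WEAK in the same move. */
static int stop_worker(int worker_efd, bool drop_rt)
{
	if (drop_rt)
		/* Unblock AND drop out of any real-time class. */
		return evl_demote_thread(worker_efd);

	/* Only unblock; scheduling parameters are preserved. */
	return evl_unblock_thread(worker_efd);
}
```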
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
Since core ABI 21, users can decide whether a new element should be
made public or private depending on the value of clone flags added to
the new long form of all element creation calls, i.e. evl_create_*().
All evl_new_*() calls become a shorthand for their respective long
form with reasonable default arguments, including private visibility.
As a shorthand, libevl also interprets a slash character leading the
name argument passed to these services as an implicit request for
creating a public element. In other words, this is the same as passing
EVL_CLONE_PUBLIC in the clone flags.
A public element appears as a cdev in the /dev/evl hierarchy, which
means that it is visible to other processes, which may share it. On
the contrary, a private element is only known to the process
creating it, although it does appear in the /sysfs hierarchy
regardless.
e.g.:
efd = evl_attach_self("/visible-thread");
total 0
crw-rw---- 1 root root 248, 1 Apr 17 11:59 clone
crw-rw---- 1 root root 246, 0 Apr 17 11:59 visible-thread
or,
efd = evl_attach_self("private-thread");
total 0
crw-rw---- 1 root root 248, 1 Apr 17 11:59 clone
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
Given that evl_sched_control() was hardly usable prior to the latest
fixes to the scheduler control code in the core and no feedback ever
happened about such issues, we may assume that this call has no users
yet. Take this opportunity to fix a naming inconsistency in the API.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
sched_priority is implemented as a macro by glibc which serves as a
wrapper to the real attribute field in schedparam. Some architectures
may use this trick to hide the actual identifier from the user code,
so make sure to always pull <sched.h> before referencing
sched_priority in uapi/ headers, so that such wrapping also happens
when struct evl_sched_attrs is defined.
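The resulting include pattern boils down to the following; the
<evl/sched.h> path is illustrative:

```c
/* Pull in <sched.h> first, so that glibc's sched_priority wrapper is
 * in effect by the time struct evl_sched_attrs gets defined. */
#include <sched.h>
#include <evl/sched.h>	/* uapi-derived definitions; path assumed */

static const struct evl_sched_attrs attrs = {
	.sched_policy = SCHED_FIFO,
	.sched_priority = 42,	/* may expand via glibc's macro */
};
```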
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
These changes fix up the argument passed to the core system calls in
order to abide by the new ABI allowing 32bit applications to issue
requests to 64bit kernels.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
This set of changes makes libevl y2038-safe by switching to the ABI
revision 19 of the EVL core, which generalizes the use of a 64bit
timespec type. These changes also go a long way toward preparing for the
upcoming mixed 32/64 ABI support (aka compat mode).
The changes only affect the internal interface between libevl and the
kernel, not the API. Nevertheless, the API was bumped to revision 10
with the removal of the evl_adjust_clock() service, which currently
had neither a proper specification nor a defined use case, but would
stand in the way of the y2038 sanitization work. At any rate, any future
service implementing some sort of EVL clock adjustment should
definitely not depend on the legacy struct timex which is
y2038-unsafe.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
Instead of matching whatever ABI we might happen to be compiled
against as done previously, define the kernel ABI we need as a
prerequisite
(EVL_KABI_PREREQ), checking for sanity at build time and runtime.
This prerequisite is matched against the range of ABI revisions the
kernel supports (from EVL_ABI_BASE to EVL_ABI_CURRENT). In the
simplest case, the kernel implements a single ABI with no backward
compatibility mechanism (EVL_ABI_BASE == EVL_ABI_CURRENT).
This addresses two issues:
- the fact that libevl might build against a given set of uapi/ files
does not actually mean that the corresponding kernel ABI found there
is fully compatible with what libevl expects. Specifying a
compatible ABI prereq explicitly addresses this problem.
- we can obtain services from EVL cores supporting multiple ABI
revisions (i.e. providing backward compatibility features).
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
evl_udelay() was an annoying misnomer for people with a kernel
development background: such a name suggests a busy-wait loop, whereas
evl_udelay() was actually a sleeping call.
Rename this call to evl_usleep(), converging to the glibc signature
for usleep(3) in the same move.
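A short usage sketch; the header path is assumed, and the signature is
taken to mirror usleep(3) per the commit text:

```c
#include <evl/clock.h>	/* assumed header for evl_usleep() */

/* Periodic sampling loop: sleep (not busy-wait) between iterations. */
static int sample_loop(int count)
{
	int ret = 0;

	while (count-- > 0) {
		/* read_sensor(); */
		ret = evl_usleep(1000);	/* 1 ms, usleep(3)-like */
		if (ret)
			break;
	}

	return ret;
}
```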
Just like DECLARE_EVL_TUBE_CANISTER() should be used explicitly for
declaring a new canister type for a local tube, the same should be
done with DECLARE_EVL_TUBE_CANISTER_REL() for a shared tube. Stop
calling the latter implicitly from DECLARE_EVL_TUBE_REL(), to remain
consistent with DECLARE_EVL_TUBE() usage.
A lightweight, lockless multi-reader/multi-writer FIFO with a
base-offset variant which can work over a memory segment shared
between processes. Scalar data or simple (small!) aggregates are
conveyed through a tube inside canisters which exactly fit their type.
By design, a tube is meant to be a basic, spartan mechanism: it
imposes no usage policy whatsoever on users.
As a result, a tube is strictly non-blocking, it solely detects and
notifies the caller on return about empty input (no message in) and
output contention (no buffer space for output). If required, the
caller can implement blocking states, typically with a pair of EVL
semaphores, e.g. setting up a tube conveying integers which supports
blocking mode:
DECLARE_EVL_TUBE_CANISTER(canister_type, int); /* defines struct canister_type */
DECLARE_EVL_TUBE(tube_type, canister_type) tube;
struct canister_type items[1024];
evl_init_tube(&tube, items, 1024);
evl_new_sem(&in, ...);
evl_new_sem_any(&out, CLOCK_MONOTONIC, tube.max_items, ...);
evl_get_sem(&in, ...); @ evl_get_sem(&out);
evl_tube_receive(&tube, ...); @ evl_tube_send(&tube, ...);
evl_put_sem(&out); @ evl_put_sem(&in, ...);
Returns version information about the running EVL interface including
the API and kernel ABI levels. A negative ABI level denotes the EVL
shim layer (eshi).
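A sketch of querying that information; the struct field names and the
<evl/evl.h> header path are assumptions:

```c
#include <stdio.h>
#include <evl/evl.h>	/* assumed header for evl_get_version() */

int main(void)
{
	struct evl_version v = evl_get_version();

	/* A negative ABI level means we are running over the eshi
	 * shim layer rather than a genuine EVL core. */
	printf("API %d, ABI %d: %s\n", v.api_level, v.abi_level,
		v.abi_level < 0 ? "eshi shim" : "EVL core");

	return 0;
}
```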
Events timed on the monotonic clock are the most common form used by
applications. Allow people to write more compact code by providing
creation calls and static initializers aimed at building these
directly:
- evl_new_event(), EVL_EVENT_INITIALIZER() for events timed on the
monotonic clock.
- evl_new_event_any() and EVL_EVENT_ANY_INITIALIZER() usable for
specifying the clock.
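The shorthands above could be used as follows; header path, argument
order and the EVL_CLOCK_REALTIME constant are assumptions based on the
commit text:

```c
#include <evl/event.h>	/* assumed header for the event services */

/* Static shorthand: timed on the monotonic clock. */
static struct evl_event fast_event = EVL_EVENT_INITIALIZER("fast-event");

static int setup_events(void)
{
	struct evl_event ev, ev_rt;
	int ret;

	/* Dynamic shorthand, monotonic clock implied. */
	ret = evl_new_event(&ev, "worker-event");
	if (ret)
		return ret;

	/* Long form when another clock is needed. */
	return evl_new_event_any(&ev_rt, EVL_CLOCK_REALTIME, "rt-event");
}
```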