libev 介绍

名称

libev - 一个 C 编写的功能全面的高性能事件循环。

概要

#include <ev.h>

示例程序

// a single header file is required
#include <ev.h>
 
#include <stdio.h> // for puts
 
// every watcher type has its own typedef'd struct
// with the name ev_TYPE
ev_io stdin_watcher;
ev_timer timeout_watcher;
 
// all watcher callbacks have a similar signature
// this callback is called when data is readable on stdin
static void
stdin_cb (EV_P_ ev_io *w, int revents)
{
  puts ("stdin ready");
  // for one-shot events, one must manually stop the watcher
  // with its corresponding stop function.
  ev_io_stop (EV_A_ w);
 
  // this causes all nested ev_run's to stop iterating
  ev_break (EV_A_ EVBREAK_ALL);
}
 
// another callback, this time for a time-out
static void
timeout_cb (EV_P_ ev_timer *w, int revents)
{
  puts ("timeout");
  // this causes the innermost ev_run to stop iterating
  ev_break (EV_A_ EVBREAK_ONE);
}
 
int
main (void)
{
  // use the default event loop unless you have special needs
  struct ev_loop *loop = EV_DEFAULT;
 
  // initialise an io watcher, then start it
  // this one will watch for stdin to become readable
  ev_io_init (&stdin_watcher, stdin_cb, /*STDIN_FILENO*/ 0, EV_READ);
  ev_io_start (loop, &stdin_watcher);
 
  // initialise a timer watcher, then start it
  // simple non-repeating 5.5 second timeout
  ev_timer_init (&timeout_watcher, timeout_cb, 5.5, 0.);
  ev_timer_start (loop, &timeout_watcher);
 
  // now wait for events to arrive
  ev_run (loop, 0);
 
  // break was called, so exit
  return 0;
}

关于 libev

Libev 是一个事件循环:你注册对某些事件感兴趣(比如文件描述符可读或超时发生),它将管理这些事件源并为你的程序提供事件。

为了做到这一点,它必须通过执行 事件循环 处理器或多或少完全接管你的进程(或线程),然后通过回调机制通知事件。

你通过注册所谓的 事件观察者 注册对某些事件感兴趣,它们都是你以事件的详细信息初始化的非常小的 C 结构,然后通过 starting 观察者移交给 libev 的。

特性

Libev 支持文件描述符事件的 select、poll、Linux 特有的 epoll、BSD 特有的 kqueue 以及 Solaris 特有的事件端口机制 (ev_io),Linux 的 inotify 接口 (ev_stat),Linux 的 eventfd/signalfd(用于更快更干净的线程间唤醒 (ev_async)/信号处理 (ev_signal)),相对定时器 (ev_timer),带定制重新调度逻辑的绝对定时器 (ev_periodic),同步的信号 (ev_signal),进程状态变化事件 (ev_child),以及处理事件循环机制自身的事件观察者 (ev_idle、ev_embed、ev_prepare 和 ev_check 观察者) 和文件观察者 (ev_stat),甚至还有对 fork 事件的有限支持 (ev_fork)。

它还相当快。

约定

Libev 是高度可配置的。本手册描述的是默认的(也是最常见的)配置,它支持多个事件循环。关于各种配置选项的更多信息,请参考本手册的 EMBED 一节。如果 libev 被配置为不支持多个事件循环,那么所有带名为 loop 的首个参数(类型总是 struct ev_loop *)的函数都将不再具有这个参数。

时间表示

Libev 将时间表示为一个简单的浮点数,表示自(POSIX)新纪元(实际上自 1970 年左右开始,细节很复杂,不要问)开始的秒数(可以是小数)。这个类型被称为 ev_tstamp,它也是你应该使用的类型。在 C 中它通常是 double 的别名。当你需要对它做任何计算时,把它当作普通的浮点值处理即可。

不像其名字中的 stamp 组件可能指示的那样,在 libev 中它也用于时间差值(比如延迟)。

错误处理

Libev 认为存在三种类型的错误:操作系统错误,用法错误和内部错误(bugs)。

当 libev 捕获了一个它无法处理的操作系统错误(比如一个系统调用指示了一个 libev 无法修复的条件)时,它调用通过 ev_set_syserr_cb 设置的回调,它应该修复问题或终止。默认的回调是打印一条诊断信息并调用 abort()

当 libev 探测到一个用法错误,比如一个负的定时器间隔,则它将打印一条诊断信息并终止(通过 assert 机制,NDEBUG 将禁用这项检查):这些是 libev 调用者中出现的编程错误,且需要在那里被修复。

Libev 还有一些内部的错误检查断言,以及一些扩展的一致性检查代码。通常情况下它们不会被触发,它们通常表示 libev 中出现了一个 bug 或更糟。

全局函数

这些函数可以随时调用,甚至在以任何方式初始化库之前。

ev_tstamp ev_time ()

返回 libev 所使用的格式的当前时间。注意 ev_now 函数通常更快,且也常常返回你实际想要知道的时间戳。ev_now_update 与 ev_now 的结合也很有意思。

ev_sleep (ev_tstamp interval)

休眠给定的时间:当前线程将阻塞,直到它被中断或经过了给定的时间间隔(大约 - 即使不中断,它可能也会早返回一点)。如果 interval <= 0 就立即返回。

基本上这是一个粒度比秒更高的 sleep()

interval 的范围是有限的 - libev 只保证最长一天 (interval <= 86400) 的休眠时间是可以工作的。

int ev_version_major ()

int ev_version_minor ()

你可以通过调用函数 ev_version_major 和 ev_version_minor 得到你所链接的库的主、次 ABI 版本号。如果你愿意,你可以拿它们与全局符号 EV_VERSION_MAJOR 和 EV_VERSION_MINOR 做比较,后两者描述了你的程序编译时所基于的库版本。

这些版本号引用了库的 ABI 版本,而不是发布版本。

通常,主版本号不匹配时终止程序比较好,因为这指示了不兼容的修改。次版本号通常都与老版本兼容,因此,更大的次版本号通常都不是什么问题。

示例:确保我们没有被无意地链接到错误的版本(注意,然而,这无法探测其它的 ABI 不匹配,比如 LFS 或可重入性)。

assert (("libev version mismatch",
         ev_version_major () == EV_VERSION_MAJOR
         && ev_version_minor () >= EV_VERSION_MINOR));

unsigned int ev_supported_backends ()

返回编译进 libev 二进制(独立于你正在运行的系统上它们的可用性)中的所有后端的集合(比如,它们的对应 EV_BACKEND_* 值)。参考 ev_default_loop 获得这些值的描述。

示例:确保我们具有 epoll 方法,因为是的,这是很酷,一定有,我们可以有一个它的洪流!!!11

assert (("sorry, no epoll, no sex",
         ev_supported_backends () & EVBACKEND_EPOLL));

unsigned int ev_recommended_backends ()

返回编译进 libev 二进制文件、且建议在本平台上使用的所有后端的集合,这意味着它们可以用于大多数的文件描述符类型。这个集合通常比 ev_supported_backends 返回的要小。例如,大多数 BSD 上的 kqueue 都被排除在外,除非你明确请求它(并假设你知道自己在做什么),它不会被自动探测。如果你没有显式地指定后端,这就是 libev 将会探测的后端集合。

unsigned int ev_embeddable_backends ()

返回可以嵌入其它事件循环中的后端的集合。这个值是平台特有的,但可以包含当前系统上不可用的后端。为了找出当前系统可能支持的可嵌入后端,你需要查看 ev_embeddable_backends () & ev_supported_backends ();对于建议使用的后端,道理相同。

参考 ev_embed 观察者的描述来获得更多信息。

ev_set_allocator (void *(*cb)(void *ptr, long size) throw ())

设置所用的分配函数(原型都是类似的 - 语义与 realloc C89/SuS/POSIX 函数一致)。它用于分配和释放内存(这里不要惊讶)。如果当需要分配内存时它返回零 (size != 0),库可能会终止或执行一些潜在的破坏性的行为。

由于一些系统(至少是 OpenBSD 和 Darwin)无法实现正确的 realloc 语义,libev 将默认使用一个基于系统的 reallocfree 函数的封装。

你可以在高可用性程序中覆盖这个函数,比如,如果它无法分配内存就释放一些内存,使用一个特殊的分配器,或者甚至是休眠一会儿并重试直到有内存可用。

示例:用一个等待一会儿并重试的分配器替换 libev 分配器(例子需要一个与标准兼容的 realloc)。

static void *
persistent_realloc (void *ptr, size_t size)
{
  for (;;)
    {
      void *newptr = realloc (ptr, size);
 
      if (newptr)
        return newptr;
 
      sleep (60);
    }
}
 
. . .
ev_set_allocator (persistent_realloc);

ev_set_syserr_cb (void (*cb)(const char *msg) throw ())

设置在发生可重试的系统调用错误(比如 select、poll、epoll_wait 失败)时调用的回调函数。消息参数是一个可打印的字符串,指明导致问题的系统调用或子系统。如果设置了这个回调,则 libev 期望它补救这种状况;一旦回调返回,libev 通常会重试请求的操作,或者,如果错误条件没有消失,执行 bad stuff(比如终止程序)。

示例:这基本上也是 libev 内部所做的事情。

static void
fatal_error (const char *msg)
{
  perror (msg);
  abort ();
}
 
. . .
ev_set_syserr_cb (fatal_error);

ev_feed_signal (int signum)

这个函数可被用于 “模拟” 一个信号接收。在任何时候,任何上下文,包括信号处理器或随机线程,调用这个函数都是完全安全的。

它的主要用途是在你的进程中定制信号处理。比如,你可以默认在所有线程中阻塞信号(当创建任何 loops 时指定 EVFLAG_NOSIGMASK),然后在一个线程中,使用 sigwait 或其它的机制来等待信号,再通过调用 ev_feed_signal 将它们“传送”给 libev。

控制事件循环的函数

事件循环有一个 struct ev_loop * 描述(在这个场景下 struct 不是 可选的,除非 libev 3 兼容性被禁用,因为 libev 3 有一个 ev_loop 函数与结构体名字冲突)。

库了解两种类型的循环,default 循环支持子进程事件,而动态创建的事件循环不支持。

struct ev_loop *ev_default_loop (unsigned int flags)

它返回 "default" 的事件循环对象,它是你通常在只想要个 "事件循环" 时应该使用的。事件循环对象和 flags 参数在 ev_loop_new 的部分会有更详细的描述。

如果默认的循环已经初始化了,则这个函数简单地返回它(并忽略 flags。如果这令你烦恼,可以检查 ev_backend())。否则它将以给定的 flags 创建它;flags 几乎总是 0,除非调用者同时也是 ev_run 的调用者,或者可以算作“主程序”。

如果你不知道使用什么事件循环,则使用这个函数返回的那个(或通过 EV_DEFAULT 宏)。

注意这个函数 不是 线程安全的,因此如果你想在多个线程中使用它,你不得不使用某种互斥量(还要注意,这种情况不太常见,因为循环本来就不容易在线程之间共享)。

默认的循环是仅有的可以处理 ev_child 观察者的循环,为了做到这一点,它总是为 SIGCHLD 注册一个处理程序。如果这对你的应用是一个问题,你可以通过 ev_loop_new 创建一个动态的循环,它不会那样做,或你可以简单地在调用 ev_default_init 之后 覆盖 SIGCHLD 信号处理程序。

示例:这是最典型的用法。

if (!ev_default_loop (0))
  fatal ("could not initialise libev, bad $LIBEV_FLAGS in environment?");

示例:限制 libev 使用 select 和 poll,且不允许把环境设置考虑进去:

ev_default_loop (EVBACKEND_POLL | EVBACKEND_SELECT | EVFLAG_NOENV);

struct ev_loop *ev_loop_new (unsigned int flags)

这将创建并初始化一个新的事件循环对象。如果循环无法初始化,则返回 false。

这个函数是线程安全的,与线程一起使用 libev 的一种常见方式是为每个线程创建一个循环,并在 “主” 或 “初始化” 线程中使用默认的循环。

flags 参数可被用于指定特殊的行为或要使用的特定后端,且通常被指定为 0(或 EVFLAG_AUTO)。

libev 支持下列标记:

  • EVFLAG_AUTO
    默认的 flags 值。如果你没有头绪就使用它(没错,相信我)。

  • EVFLAG_NOENV
    如果标记值中设置了这个标记位(或程序以 setuid 或 setgid 运行),则 libev 将 不会 查看环境变量 LIBEV_FLAGS。否则(默认的),如果在环境中找到了标记则该环境变量将完全覆盖标记。这在尝试特定的后端来测试其性能,绕过 bugs,或使得 libev 线程安全(访问环境变量无法以线程安全的方式完成,但通常在没有其它线程修改它们时可以工作)时很有用。

  • EVFLAG_FORKCHECK
    除了在 fork 之后手动地调用 ev_loop_fork,你还可以通过启用这个标记让 libev 在每个迭代中检查 fork。

它通过在循环的每一次迭代中调用 getpid() 来工作。如果你执行大量的循环迭代但只做很少的实际工作,这可能会降低你的事件循环的速度,但通常并不明显(比如在我的 GNU/Linux 系统上,getpid 实际上是一个简单的 5 指令序列而没有系统调用,因此非常快,但我的 GNU/Linux 系统还有 pthread_atfork,它甚至更快)。

当你使用这个标记时这个标记的巨大的好处是你可以忘记 fork(并忘记忘记告诉 libev 关于 fork,尽管你依然不得不忽略 SIGPIPE)。

这个标记不能被 LIBEV_FLAGS 环境变量的值覆盖或指定。

  • EVFLAG_NOINOTIFY
    当指定这个标记时,libev 将不会试图为它的 ev_stat 观察者使用 inotify API。除了调试和测试之外,这个标记对于节省 inotify 文件描述符也很有用,否则每个使用 ev_stat 观察者的循环都要消耗一个 inotify 句柄。

  • EVFLAG_SIGNALFD
    当设置这个标记时,则 libev 将试图为它的 ev_signal (和 ev_child) 观察者使用 signalfd API。这个 API 同步地传递信号,这使它更快且可能使它能够获得入队的信号数据。只要你在对处理信号不感兴趣的线程中正确地阻塞信号,它也可以简化多线程中的信号处理。

默认情况下,signalfd 不会被使用,因为这会改变你的信号掩码,并且有很多很好的库和程序(例如,glib 的线程池)无法正确初始化它们的信号掩码

  • EVFLAG_NOSIGMASK
    当指定这个标记时,则 libev 将避免修改信号掩码。特别地,这意味着当你想接收信号时你不得不确保它们是未阻塞的。

当你想要执行你自己的信号处理,或只想在特定的线程中处理信号,并希望避免 libev 解除对信号的阻塞时,这个行为很有用。

在多线程程序中这也是 POSIX 所要求的,因为 libev 会调用 sigprocmask,而后者在多线程程序中的行为是未正式定义的。

这个标记的行为将在未来的 libev 版本中变为默认的行为。

  • EVBACKEND_SELECT (值为 1,可移植的 select 后端)
    这是你的标准 select(2) 后端。它不完全标准,因为 libev 尝试使用自己实现的 fd_set 来绕过 fds 数量的限制,但如果这失败了,那么在使用这个后端时,fds 的数量就会有一个相当低的上限。它的扩展性不好 (O(highest_fd)),但对于少量的(编号也小的 :)fds,它通常是最快的后端。
    为了从这个后端获得良好的性能,你需要大量的并发(大多数文件描述符应该处于忙碌状态)。如果你在编写一个服务器,你应该在一个循环中调用 accept(),以便在一次迭代中接受尽可能多的连接。你也许还想看看 ev_set_io_collect_interval(),以增加每次迭代中获得的可读性通知的数量。
    这个后端把 EV_READ 映射到 readfds 集合,并把 EV_WRITE 映射到 writefds 集合(为了绕过 Microsoft Windows bugs,还可以在该平台上设置的 exceptfds)。

  • EVBACKEND_POLL (值为 2,poll 后端,除了 windows 外的其它地方都可用)
    这是你的标准 poll(2) 后端。它比 select 更复杂,但对稀疏 fds 的处理更好,且对你可以使用的 fds 的个数没有人为限制(除了在非活跃 fds 比较多时,它将大大减慢)。参考上面的 EVBACKEND_SELECT 的条目,获得性能提示。
    这个后端把 EV_READ 映射为 POLLIN | POLLERR | POLLHUP,把 EV_WRITE 映射为 POLLOUT | POLLERR | POLLHUP

  • EVBACKEND_EPOLL (value 4, Linux)
    使用 linux 特有的 epoll(7) 接口(2.6.9 之前和之后的内核版本都是)。
    对于一些 fds,这个后端可能比 poll 和 select 慢一点,但它的表现更好。尽管 poll 和 select 通常的表现大概为 O(total_fds),其中 total_fds 是 fds 的总个数(或最高的 fd), epoll 的表现为 O(1) 或 O(active_fds)。
    epoll 值得“光荣”提名,因为它是各种更高级的事件机制中设计得最失败的一个:烦恼包括安静地丢弃文件描述符,每个文件描述符的每次改变都要求一个系统调用(以及不必要的参数猜测),dup 的问题,在超时值到达之前返回从而导致额外的迭代(而且精度只有 5 ms,而 select 在相同的平台上精度为 0.1 ms)等等。然而最大的问题是 fork 竞争 - 如果一个程序 fork 了,则父进程和子进程都不得不重建 epoll 集合,这会消耗相当多的时间(每个文件描述符一次系统调用),而且当然难以探测。
    Epoll 也是出了名的多 bug - 嵌入 epoll fds 本 应该 能工作,但当然 不能;而且 epoll 很喜欢报告与注册进集合中的那些完全 不同的 文件描述符的事件(特别是在 SMP 系统上;甚至包括已经关闭的描述符,因此连把它们从集合中移除都做不到)。Libev 试图通过使用一个额外的生成计数器来对付这些虚假的通知,将其与事件进行比较以过滤掉虚假的通知,并在需要的时候重建集合。Epoll 还会错误地向下舍入超时值,但没有办法知道何时以及舍入多少,所以有时你不得不忙等待,因为尽管超时非零,epoll 仍会立即返回。最后,它还拒绝使用一些在 select 中可以完美工作的文件描述符(文件,许多字符设备……)。
    Epoll 真的是事件 poll 机制中的火车残骸,一个 frankenpoll,匆忙拼凑在一起,没有想到设计或与他人互动。 哦,痛苦,会不会停止 . . .
    尽管在同一迭代中停止、设置并再启动一个 I/O 观察者会有一些缓存,每次这样的操作仍然需要一个系统调用(因为相同的 文件描述符 现在可能指向不同的 文件描述),所以最好避免这样做。而且,如果你为两个文件描述符都注册事件,dup() 出来的文件描述符可能不能很好地工作。
    该后端的最佳性能是通过在关闭之前尽可能不注销文件描述符的所有观察者来实现的,比如任何时候每个 fd 都保持至少一个观察者活跃。停止并启动一个观察者(没有重新设置它)也通常不导致额外的开销。一个 fork 可能同时导致虚假的通知,及 libev 不得不销毁并重建 epoll 对象,这可能消耗大量的时间,且这是应该避免的。
    所有的这些意味着,在实践上,EVBACKEND_SELECT 对于至多上百个文件描述符可能像 epoll 一样快或更快,依赖于用法。多么的悲伤啊。
    尽管名义上可以嵌入到其它事件循环中,但到目前为止,在所有测试过的内核版本上,这个功能都是坏的。
    这个后端映射 EV_READEV_WRITE 的方式与 EVBACKEND_POLL 的相同。

  • EVBACKEND_KQUEUE (值为 8, most BSD clones)
    Kqueue deserves special mention, as at the time of this writing, it was broken on all BSDs except NetBSD (usually it doesn't work reliably with anything but sockets and pipes, except on Darwin, where of course it's completely useless). Unlike epoll, however, whose brokenness is by design, these kqueue bugs can (and eventually will) be fixed without API changes to existing programs. For this reason it's not being "auto-detected" unless you explicitly specify it in the flags (i.e. using EVBACKEND_KQUEUE) or libev was compiled on a known-to-be-good (-enough) system like NetBSD.
    You still can embed kqueue into a normal poll or select backend and use it only for sockets (after having made sure that sockets work with kqueue on the target platform). See ev_embed watchers for more info.
    It scales in the same way as the epoll backend, but the interface to the kernel is more efficient (which says nothing about its actual speed, of course). While stopping, setting and starting an I/O watcher does never cause an extra system call as with EVBACKEND_EPOLL, it still adds up to two event changes per incident. Support for fork () is very bad (you might have to leak fd's on fork, but it's more sane than epoll) and it drops fds silently in similarly hard-to-detect cases.
    This backend usually performs well under most conditions.
    While nominally embeddable in other event loops, this doesn't work everywhere, so you might need to test for this. And since it is broken almost everywhere, you should only use it when you have a lot of sockets (for which it usually works), by embedding it into another event loop (e.g. EVBACKEND_SELECT or EVBACKEND_POLL (but poll is of course also broken on OS X)) and, did I mention it, using it only for sockets.
    This backend maps EV_READ into an EVFILT_READ kevent with NOTE_EOF, and EV_WRITE into an EVFILT_WRITE kevent with NOTE_EOF.

  • EVBACKEND_DEVPOLL (值为 16,Solaris 8)
    这还没有实现(可能从不会实现了,除非你给我发一个实现)。根据报告,/dev/poll 只支持 sockets,且不是可嵌入的,这将大大限制这个后端的有用性。

  • EVBACKEND_PORT (值为 32,Solaris 10)
    This uses the Solaris 10 event port mechanism. As with everything on Solaris, it's really slow, but it still scales very well (O(active_fds)).
    While this backend scales well, it requires one system call per active file descriptor per loop iteration. For small and medium numbers of file descriptors a "slow" EVBACKEND_SELECT or EVBACKEND_POLL backend might perform better.
    On the positive side, this backend actually performed fully to specification in all tests and is fully embeddable, which is a rare feat among the OS-specific backends (I vastly prefer correctness over speed hacks).
    On the negative side, the interface is bizarre - so bizarre that even sun itself gets it wrong in their code examples: The event polling function sometimes returns events to the caller even though an error occurred, but with no indication whether it has done so or not (yes, it's even documented that way) - deadly for edge-triggered interfaces where you absolutely have to know whether an event occurred or not because you have to re-arm the watcher.
    Fortunately libev seems to be able to work around these idiocies.
    This backend maps EV_READ and EV_WRITE in the same way as EVBACKEND_POLL.

  • EVBACKEND_ALL
    尝试所有的后端(甚至是在用 EVFLAG_AUTO 时不会尝试的潜在的烂的那些)。由于它是一个掩码,你可以做一些事情,比如 EVBACKEND_ALL & ~EVBACKEND_KQUEUE
    绝对不推荐使用这个标志,使用 ev_recommended_backends() 返回的那些,或者简单地不指定后端。

  • EVBACKEND_MASK
    不是一个后端,而是一个用来从 flags 值中选出所有后端位的掩码,用在你想从标记值中屏蔽掉所有后端的情况下(比如在修改 LIBEV_FLAGS 环境变量时)。

如果标记值中有一个或多个后端标记,则只会尝试这些后端(以这里列出的相反的顺序)。如果没有指定,则会尝试 ev_recommended_backends() 中的所有后端。

示例:尝试创建一个只使用 epoll 的事件循环。

struct ev_loop *epoller = ev_loop_new (EVBACKEND_EPOLL | EVFLAG_NOENV);
if (!epoller)
  fatal ("no epoll found here, maybe it hides under your chair");

示例:使用 libev 提供的,但确保在 kqueue 可用时使用了它。

struct ev_loop *loop = ev_loop_new (ev_recommended_backends () | EVBACKEND_KQUEUE);

ev_loop_destroy (loop)

销毁一个事件循环对象(释放所有的内存和内核状态等等)。没有一个活跃的观察者将在正常意义上停止,比如,ev_is_active 可能依然返回 true。在调用这个函数 之前 你自己干净地停止所有的观察者,或者事后处理它们(这通常是最简单的事情,比如你可以只是忽略观察者并/或 free() 它们)都是你的责任。

注意某些全局状态,比如信号状态(及安装的信号处理程序),将不会被这个函数释放,相关的观察者(比如信号和 child 观察者)需要手动地停止。

这个函数通常作用于 ev_loop_new 分配的 loop 对象,但它也可以用于 ev_default_loop 返回的默认的 loop,只是在这种情况下不是线程安全的。

注意不建议对默认的 loop 调用这个函数,除了在极少的你真的需要释放它的资源的情况下。如果你需要动态地分配 loops,则最好使用 ev_loop_newev_loop_destroy

ev_loop_fork (loop)

This function sets a flag that causes subsequent ev_run iterations to reinitialise the kernel state for backends that have one. Despite the name, you can call it anytime you are allowed to start or stop watchers (except inside an ev_prepare callback), but it makes most sense after forking, in the child process. You must call it (or use EVFLAG_FORKCHECK) in the child before resuming or calling ev_run.

In addition, if you want to reuse a loop (via this function or EVFLAG_FORKCHECK), you also have to ignore SIGPIPE.

Again, you have to call it on any loop that you want to re-use after a fork, even if you do not plan to use the loop in the parent. This is because some kernel interfaces cough kqueue cough do funny things during fork.

On the other hand, you only need to call this function in the child process if and only if you want to use the event loop in the child. If you just fork+exec or create a new loop in the child, you don't have to call it at all (in fact, epoll is so badly broken that it makes a difference, but libev will usually detect this case on its own and do a costly reset of the backend).

The function itself is quite fast and it's usually not a problem to call it just in case after a fork.

Example: Automate calling ev_loop_fork on the default loop when using pthreads.

static void
post_fork_child (void)
{
  ev_loop_fork (EV_DEFAULT);
}
 
...
pthread_atfork (0, 0, post_fork_child);

int ev_is_default_loop (loop)

当给定的 loop 实际上是默认的 loop 时返回 true,否则返回 false。

unsigned int ev_iteration (loop)

返回事件循环当前迭代的次数,它与 libev 为新事件执行 poll 的次数一致。它从 0 开始计数,迭代次数足够多时会愉快地回绕。

This value can sometimes be useful as a generation counter of sorts (it "ticks" the number of loop iterations), as it roughly corresponds with ev_prepare and ev_check calls - and is incremented between the prepare and check phases.

unsigned int ev_depth (loop)

Returns the number of times ev_run was entered minus the number of times ev_run was exited normally, in other words, the recursion depth.

Outside ev_run, this number is zero. In a callback, this number is 1, unless ev_run was invoked recursively (or from another thread), in which case it is higher.

Leaving ev_run abnormally (setjmp/longjmp, cancelling the thread, throwing an exception etc.), doesn't count as "exit" - consider this as a hint to avoid such ungentleman-like behaviour unless it's really convenient, in which case it is fully supported.

unsigned int ev_backend (loop)

返回 EVBACKEND_* 标记中的一个,以指明使用的事件后端。

ev_tstamp ev_now (loop)

返回当前的“事件循环时间”,它是事件循环接收事件并开始处理它们的时刻。在处理回调期间,这个时间戳不会改变,它也是相对定时器所使用的基准时间。你可以把它当作事件发生(或者更准确地说,libev 发现事件)的时间。

ev_now_update (loop)

Establishes the current time by querying the kernel, updating the time returned by ev_now () in the progress. This is a costly operation and is usually done automatically within ev_run ().

This function is rarely useful, but when some event callback runs for a very long time without entering the event loop, updating libev's idea of the current time is a good idea.

See also "The special problem of time updates" in the ev_timer section.

ev_suspend (loop)

ev_resume (loop)

这两个函数挂起并恢复一个事件循环,用于 loop 有一段时间不使用、且超时不应该被处理的场合。

典型的使用场景是交互式的程序,比如游戏:当用户按下 ^Z 挂起游戏,并在一小时后恢复,对于超时最好的处理是在程序挂起期间就像时间没有流逝一样。这可以通过在你的 SIGTSTP 处理程序中调用 ev_suspend,给你自己发送一个 SIGSTOP 并在之后直接调用 ev_resume 恢复定时器处理来实现。

Effectively, all ev_timer watchers will be delayed by the time spend between ev_suspend and ev_resume, and all ev_periodic watchers will be rescheduled (that is, they will lose any events that would have occurred while suspended). After calling ev_suspend you must not call any function on the given loop other than ev_resume, and you must not call ev_resume without a previous call to ev_suspend.

Calling ev_suspend/ev_resume has the side effect of updating the event loop time (see ev_now_update).

bool ev_run (loop, int flags)

Finally, this is it, the event handler. This function usually is called after you have initialised all your watchers and you want to start handling events. It will ask the operating system for any new events, call the watcher callbacks, and then repeat the whole process indefinitely: This is why event loops are called loops.

If the flags argument is specified as 0, it will keep handling events until either no event watchers are active anymore or ev_break was called.

The return value is false if there are no more active watchers (which usually means "all jobs done" or "deadlock"), and true in all other cases (which usually means " you should call ev_run again").

Please note that an explicit ev_break is usually better than relying on all watchers to be stopped when deciding when a program has finished (especially in interactive programs), but having a program that automatically loops as long as it has to and no longer by virtue of relying on its watchers stopping correctly, that is truly a thing of beauty.

This function is mostly exception-safe - you can break out of a ev_run call by calling longjmp in a callback, throwing a C++ exception and so on. This does not decrement the ev_depth value, nor will it clear any outstanding EVBREAK_ONE breaks.

A flags value of EVRUN_NOWAIT will look for new events, will handle those events and any already outstanding ones, but will not wait and block your process in case there are no events and will return after one iteration of the loop. This is sometimes useful to poll and handle new events while doing lengthy calculations, to keep the program responsive.

A flags value of EVRUN_ONCE will look for new events (waiting if necessary) and will handle those and any already outstanding ones. It will block your process until at least one new event arrives (which could be an event internal to libev itself, so there is no guarantee that a user-registered callback will be called), and will return after one iteration of the loop.

This is useful if you are waiting for some external event in conjunction with something not expressible using other libev watchers (i.e. "roll your own ev_run"). However, a pair of ev_prepare/ev_check watchers is usually a better approach for this kind of thing.

Here are the gory details of what ev_run does (this is for your understanding, not a guarantee that things will work exactly like this in future versions):

- Increment loop depth.
- Reset the ev_break status.
- Before the first iteration, call any pending watchers.
LOOP:
- If EVFLAG_FORKCHECK was used, check for a fork.
- If a fork was detected (by any means), queue and call all fork watchers.
- Queue and call all prepare watchers.
- If ev_break was called, goto FINISH.
- If we have been forked, detach and recreate the kernel state
  as to not disturb the other process.
- Update the kernel state with all outstanding changes.
- Update the "event loop time" (ev_now ()).
- Calculate for how long to sleep or block, if at all
  (active idle watchers, EVRUN_NOWAIT or not having
  any active watchers at all will result in not sleeping).
- Sleep if the I/O and timer collect interval say so.
- Increment loop iteration counter.
- Block the process, waiting for any events.
- Queue all outstanding I/O (fd) events.
- Update the "event loop time" (ev_now ()), and do time jump adjustments.
- Queue all expired timers.
- Queue all expired periodics.
- Queue all idle watchers with priority higher than that of pending events.
- Queue all check watchers.
- Call all queued watchers in reverse order (i.e. check watchers first).
  Signals and child watchers are implemented as I/O watchers, and will
  be handled here by queueing them when their watcher gets executed.
- If ev_break has been called, or EVRUN_ONCE or EVRUN_NOWAIT
  were used, or there are no active watchers, goto FINISH, otherwise
  continue with step LOOP.
FINISH:
- Reset the ev_break status iff it was EVBREAK_ONE.
- Decrement the loop depth.
- Return.

Example: Queue some jobs and then loop until no events are outstanding anymore.

... queue jobs here, make sure they register event watchers as long
... as they still have work to do (even an idle watcher will do..)
ev_run (my_loop, 0);
... jobs done or somebody called break. yeah!

ev_break (loop, how)

可被用于使一个 ev_run 调用提前返回(但只有在其处理完了所有 outstanding 事件之后)。其中的 how 参数必须是 EVBREAK_ONE(它使最内层的 ev_run 返回)或者 EVBREAK_ALL(它使所有嵌套的 ev_run 都返回)。

这个 "break 状态" 将在下次调用 ev_run 时被清除。

在任何 ev_run 调用之外调用 ev_break 也是安全的,只是在那种情况下不起作用。

ev_ref (loop)

ev_unref (loop)

Ref/unref 可以被用于添加或移除一个事件循环的引用计数。每个观察者持有一个引用,只要引用计数为非零,ev_run 就不会自己返回。

当你有一个你从不想注销的观察者,但它又不应该阻止 ev_run 返回时,这很有用。在这种情况下,在启动它之后调用 ev_unref,在停止它之前调用 ev_ref。

As an example, libev itself uses this for its internal signal pipe: It is not visible to the libev user and should not keep ev_run from exiting if no event watchers registered by it are active. It is also an excellent way to do this for generic recurring timers or from within third-party libraries. Just remember to unref after start and ref before stop (but only if the watcher wasn't active before, or was active before, respectively. Note also that libev might stop watchers itself (e.g. non-repeating timers) in which case you have to ev_ref in the callback).

Example: Create a signal watcher, but keep it from keeping ev_run running when nothing else is active.

ev_signal exitsig;
ev_signal_init (&exitsig, sig_cb, SIGINT);
ev_signal_start (loop, &exitsig);
ev_unref (loop);

Example: For some weird reason, unregister the above signal handler again.

ev_ref (loop);
ev_signal_stop (loop, &exitsig);

ev_set_io_collect_interval (loop, ev_tstamp interval)

ev_set_timeout_collect_interval (loop, ev_tstamp interval)

These advanced functions influence the time that libev will spend waiting for events. Both time intervals are by default 0, meaning that libev will try to invoke timer/periodic callbacks and I/O callbacks with minimum latency.
Setting these to a higher value (the interval must be >= 0) allows libev to delay invocation of I/O and timer/periodic callbacks to increase efficiency of loop iterations (or to increase power-saving opportunities).
The idea is that sometimes your program runs just fast enough to handle one (or very few) event(s) per loop iteration. While this makes the program responsive, it also wastes a lot of CPU time to poll for new events, especially with backends like select () which have a high overhead for the actual polling but can deliver many events at once.
By setting a higher io collect interval you allow libev to spend more time collecting I/O events, so you can handle more events per iteration, at the cost of increasing latency. Timeouts (both ev_periodic and ev_timer) will not be affected. Setting this to a non-null value will introduce an additional ev_sleep () call into most loop iterations. The sleep time ensures that libev will not poll for I/O events more often then once per this interval, on average (as long as the host time resolution is good enough).
Likewise, by setting a higher timeout collect interval you allow libev to spend more time collecting timeouts, at the expense of increased latency/jitter/inexactness (the watcher callback will be called later). ev_io watchers will not be affected. Setting this to a non-null value will not introduce any overhead in libev.
Many (busy) programs can usually benefit by setting the I/O collect interval to a value near 0.1 or so, which is often enough for interactive servers (of course not for games), likewise for timeouts. It usually doesn't make much sense to set it to a lower value than 0.01, as this approaches the timing granularity of most systems. Note that if you do transactions with the outside world and you can't increase the parallelity, then this setting will limit your transaction rate (if you need to poll once per transaction and the I/O collect interval is 0.01, then you can't do more than 100 transactions per second).
Setting the timeout collect interval can improve the opportunity for saving power, as the program will "bundle" timer callback invocations that are "near" in time together, by delaying some, thus reducing the number of times the process sleeps and wakes up again. Another useful technique to reduce iterations/wake-ups is to use ev_periodic
watchers and make sure they fire on, say, one-second boundaries only.
Example: we only need 0.1s timeout granularity, and we wish not to poll more often than 100 times per second:

ev_set_timeout_collect_interval (EV_DEFAULT_UC_ 0.1);
ev_set_io_collect_interval (EV_DEFAULT_UC_ 0.01);

ev_invoke_pending (loop)

This call will simply invoke all pending watchers while resetting their pending state. Normally, ev_run does this automatically when required, but when overriding the invoke callback this call comes handy. This function can be invoked from a watcher - this can be useful for example when you want to do some lengthy calculation and want to pass further event handling to another thread (you still have to make sure only one thread executes within ev_invoke_pending or ev_run of course).

int ev_pending_count (loop)

返回挂起的观察者的个数 - 零表示没有观察者挂起。

ev_set_invoke_pending_cb (loop, void (*invoke_pending_cb)(EV_P))

This overrides the invoke pending functionality of the loop: Instead of invoking all pending watchers when there are any, ev_run will call this callback instead. This is useful, for example, when you want to invoke the actual watchers inside another context (another thread etc.).
If you want to reset the callback, use ev_invoke_pending as new callback.

ev_set_loop_release_cb (loop, void (*release)(EV_P) throw (), void (*acquire)(EV_P) throw ())

Sometimes you want to share the same loop between multiple threads. This can be done relatively simply by putting mutex_lock/unlock calls around each call to a libev function.
However, ev_run can run an indefinite time, so it is not feasible to wait for it to return. One way around this is to wake up the event loop via ev_break and ev_async_send, another way is to set these release and acquire callbacks on the loop.
When set, then release will be called just before the thread is suspended waiting for new events, and acquire is called just afterwards.

Ideally, release will just call your mutex_unlock function, and acquire will just call the mutex_lock function again.

While event loop modifications are allowed between invocations of release and acquire (that's their only purpose after all), no modifications done will affect the event loop, i.e. adding watchers will have no effect on the set of file descriptors being watched, or the time waited. Use an ev_async watcher to wake up ev_run when you want it to take note of any changes you made.

In theory, threads executing ev_run will be async-cancel safe between invocations of release and acquire.

See also the locking example in the THREADS section later in this document.

ev_set_userdata (loop, void *data)

void *ev_userdata (loop)

设置和提取与一个循环关联的 void * 。当从来没有调用过 ev_set_userdata 时,ev_userdata 返回 0。

这两个函数可以用于把任意数据与 loop 关联。它们本来仅供上面描述的 invoke_pending_cb、release 和 acquire 回调使用,但当然也可以被(滥)用于任何其它目的。

ev_verify (loop)

This function only does something when EV_VERIFY support has been compiled in, which is the default for non-minimal builds. It tries to go through all internal structures and checks them for validity. If anything is found to be inconsistent, it will print an error message to standard error and call abort ().

This can be used to catch bugs inside libev itself: under normal circumstances, this function will never abort as of course libev keeps its data structures consistent.

观察者解剖

在下面的描述中,名字里大写的 TYPE 代表观察者的类型,比如 ev_TYPE_start 可能意味着,对于定时器观察者表示 ev_timer_start 及对于 I/O 观察者表示 ev_io_start

观察者是你分配并注册来记录你感兴趣的一些事件的一个不透明的结构。为了创建一个具体的例子,想象你想要等待 STDIN 变得可读,你将为其创建一个 ev_io 观察者:

static void my_cb (struct ev_loop *loop, ev_io *w, int revents)
{
  ev_io_stop (loop, w);
  ev_break (loop, EVBREAK_ALL);
}
 
struct ev_loop *loop = ev_default_loop (0);
 
ev_io stdin_watcher;
 
ev_init (&stdin_watcher, my_cb);
ev_io_set (&stdin_watcher, STDIN_FILENO, EV_READ);
ev_io_start (loop, &stdin_watcher);
 
ev_run (loop, 0);

如你所见,你负责为你的观察者结构分配内存(在栈上分配内存 通常 都不是个好主意)。

每一个观察者有一个与其关联的观察者结构(称为 struct ev_TYPE,或简称 ev_TYPE,因为为所有观察者结构都提供了 typedef)。

每个观察者结构必须通过调用 ev_init (watcher *, callback) 来初始化,这个调用需要传入一个回调。每次在事件发生时,这个回调会被调到(或者在 I/O 观察者的情况中,每次事件循环探测到给定的文件描述符可读和/或可写的时候)。

每个观察者类型都还有它自己的 ev_TYPE_set (watcher *, ...) 宏来配置它,参数列表依赖于观察者类型。还有一个宏在一个调用中结合了初始化和设置:ev_TYPE_init (watcher *, callback, ...)

为了让观察者实际观察事件,你需要用一个观察者特有的启动函数 (ev_TYPE_start (loop, watcher *)) 启动它,你可以在任何时间通过调用对应的停止函数 (ev_TYPE_stop (loop, watcher *)) 停止观察。

只要你的观察者处于活跃状态(已经启动但还没有停止),你一定不能动它里面存储的值。更具体地说,你一定不能重新初始化它,或调用它的 ev_TYPE_set 宏。

每个回调都接收 event loop 指针作为它的第一个参数,注册的观察者结构体为第二个,接收的事件的位集合为第三个参数。

接收的事件通常为每个接收的事件类型包含一个位(你可以在同一时间接收多个事件)。可能的位掩码为:

EV_READ
EV_WRITE
ev_io 观察者中的文件描述符已经变得可读和/或可写。

EV_TIMER
ev_timer 观察者已经超时。

EV_PERIODIC
ev_periodic 观察者已经超时。

EV_SIGNAL
ev_signal 观察者中指定的信号已经由一个线程接收到。

EV_CHILD
ev_child 观察者中指定的 pid 已经接收到一个状态改变。

EV_STAT
ev_stat 观察者中指定的路径以某种方式改变了其属性。

EV_IDLE
ev_idle 观察者已经决定,你没有其它更好的事情要做。

EV_PREPARE
EV_CHECK
所有的 ev_prepare 观察者仅在 ev_run 开始收集新事件 之前 调用,而所有的 ev_check 观察者仅在 ev_run 已经收集到了它们之后,但在任何接收到的事件的回调入队之前,被加入队列(而不是调用)。这意味着 ev_prepare 观察者是在事件循环休眠或为新事件而 poll 之前最后被调用的观察者,而 ev_check 观察者将在一个事件循环迭代内任何其它相同或更低优先级的观察者之前被调用。

这两种观察者类型的回调可以启动或停止任何数量它们想要的观察者,所有这些都将被考虑在内(比如,ev_prepare 观察者可能启动一个 idle 观察者来保持
ev_run 不被阻塞)。

EV_EMBED
ev_embed 观察者中指定的嵌入式事件循环需要注意。

EV_FORK
在 fork 之后,事件循环已经在子进程中恢复(参考 ev_fork)。

EV_CLEANUP
The event loop is about to be destroyed (see ev_cleanup).

EV_ASYNC
The given async watcher has been asynchronously notified (see ev_async).

EV_CUSTOM
The event is not sent (or otherwise used) by libev itself, but can be freely used by libev users to signal watchers (e.g. via ev_feed_event).

EV_ERROR

An unspecified error has occurred, the watcher has been stopped. This might happen because the watcher could not be properly started because libev ran out of memory, a file descriptor was found to be closed or any other problem. Libev considers these application bugs.

You best act on it by reporting the problem and somehow coping with the watcher being stopped. Note that well-written programs should not receive an error ever, so when your watcher receives it, this usually indicates a bug in your program.

Libev will usually signal a few "dummy" events together with an error, for example it might indicate that a fd is readable or writable, and if your callback is well-written it can just attempt the operation and cope with the error from read() or write(). This will not work in multi-threaded programs, though, as the fd could already be closed and reused for another thing, so beware.
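Since several of these bits can arrive in one revents argument, a callback should test each bit it cares about individually. A minimal self-contained sketch (with made-up constants, not libev's actual EV_READ/EV_WRITE values):

```c
#include <assert.h>

/* Illustrative bit values, not libev's real constants. */
enum { MY_READ = 0x01, MY_WRITE = 0x02 };

/* A callback receives a bitset and may see several events at once;
   test each interesting bit separately. */
static int classify (int revents)
{
  int n = 0;
  if (revents & MY_READ)  n += 1;   /* fd became readable */
  if (revents & MY_WRITE) n += 2;   /* fd became writable */
  return n;
}
```

In a real ev_io callback the tests would be `revents & EV_READ` and `revents & EV_WRITE`.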

GENERIC WATCHER FUNCTIONS

ev_init (ev_TYPE *watcher, callback)

This macro initialises the generic portion of a watcher. The contents of the watcher object can be arbitrary (so malloc will do). Only the generic parts of the watcher are initialised, you need to call the type-specific ev_TYPE_set macro afterwards to initialise the type-specific parts. For each type there is also a ev_TYPE_init macro which rolls both calls into one.

You can reinitialise a watcher at any time as long as it has been stopped (or never started) and there are no pending events outstanding.

The callback is always of type void (*)(struct ev_loop *loop, ev_TYPE *watcher, int revents).

Example: Initialise an ev_io watcher in two steps:

ev_io w;
ev_init (&w, my_cb);
ev_io_set (&w, STDIN_FILENO, EV_READ);

ev_TYPE_set (ev_TYPE *watcher, [args])

This macro initialises the type-specific parts of a watcher. You need to call ev_init at least once before you call this macro, but you can call ev_TYPE_set any number of times. You must not, however, call this macro on a watcher that is active (it can be pending, however, which is a difference to the ev_init macro).

Although some watcher types do not have type-specific arguments (e.g. ev_prepare) you still need to call its set macro.

See ev_init, above, for an example.

ev_TYPE_init (ev_TYPE *watcher, callback, [args])

This convenience macro rolls both ev_init and ev_TYPE_set
macro calls into a single call. This is the most convenient method to initialise a watcher. The same limitations apply, of course.

Example: Initialise and set an ev_io watcher in one step:

ev_io_init (&w, my_cb, STDIN_FILENO, EV_READ);

ev_TYPE_start (loop, ev_TYPE *watcher)

Starts (activates) the given watcher. Only active watchers will receive events. If the watcher is already active nothing will happen.

Example: Start the ev_io watcher that is being abused as example in this whole section.

ev_io_start (EV_DEFAULT_UC, &w);

ev_TYPE_stop (loop, ev_TYPE *watcher)

Stops the given watcher if active, and clears the pending status (whether the watcher was active or not).

It is possible that stopped watchers are pending - for example, non-repeating timers are being stopped when they become pending - but calling ev_TYPE_stop ensures that the watcher is neither active nor pending. If you want to free or reuse the memory used by the watcher it is therefore a good idea to always call its ev_TYPE_stop
function.

bool ev_is_active (ev_TYPE *watcher)

Returns a true value iff the watcher is active (i.e. it has been started and not yet been stopped). As long as a watcher is active you must not modify it.

bool ev_is_pending (ev_TYPE *watcher)

Returns a true value iff the watcher is pending (i.e. it has outstanding events but its callback has not yet been invoked). As long as a watcher is pending (but not active) you must not call an init function on it (but ev_TYPE_set is safe), you must not change its priority, and you must make sure the watcher is available to libev (e.g. you cannot free () it).

callback ev_cb (ev_TYPE *watcher)

Returns the callback currently set on the watcher.

ev_set_cb (ev_TYPE *watcher, callback)

Change the callback. You can change the callback at virtually any time (modulo threads).

ev_set_priority (ev_TYPE *watcher, int priority)

int ev_priority (ev_TYPE *watcher)

Set and query the priority of the watcher. The priority is a small integer between EV_MAXPRI (default: 2) and EV_MINPRI (default: -2). Pending watchers with higher priority will be invoked before watchers with lower priority, but priority will not keep watchers from being executed (except for ev_idle watchers).

If you need to suppress invocation when higher priority events are pending you need to look at ev_idle watchers, which provide this functionality.

You must not change the priority of a watcher as long as it is active or pending.

Setting a priority outside the range of EV_MINPRI to EV_MAXPRI
is fine, as long as you do not mind that the priority value you query might or might not have been clamped to the valid range.

The default priority used by watchers when no priority has been set is always 0, which is supposed to not be too high and not be too low :).

See "WATCHER PRIORITY MODELS", below, for a more thorough treatment of priorities.

ev_invoke (loop, ev_TYPE *watcher, int revents)

Invoke the watcher with the given loop and revents. Neither loop nor revents need to be valid as long as the watcher callback can deal with that fact, as both are simply passed through to the callback.

int ev_clear_pending (loop, ev_TYPE *watcher)

If the watcher is pending, this function clears its pending status and returns its revents bitset (as if its callback was invoked). If the watcher isn't pending it does nothing and returns 0.

Sometimes it can be useful to "poll" a watcher instead of waiting for its callback to be invoked, which can be accomplished with this function.

ev_feed_event (loop, ev_TYPE *watcher, int revents)

Feeds the given event set into the event loop, as if the specified event had happened for the specified watcher (which must be a pointer to an initialised but not necessarily started event watcher). Obviously you must not free the watcher as long as it has pending events.

Stopping the watcher, letting libev invoke it, or calling ev_clear_pending will clear the pending event, even if the watcher was not started in the first place.

See also ev_feed_fd_event and ev_feed_signal_event
for related functions that do not need a watcher.
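The pending mechanics described by ev_feed_event and ev_clear_pending can be sketched in a few lines: feeding an event marks the watcher pending and accumulates event bits, while clearing returns the accumulated bitset and resets it. This is a self-contained model of the documented behaviour (mock names, not libev internals):

```c
#include <assert.h>

/* Mock of the pending mechanics, not libev's implementation. */
typedef struct { int pending; int revents; } mock_w;

static void mock_feed_event (mock_w *w, int revents)
{
  w->pending = 1;
  w->revents |= revents;   /* event bits accumulate until delivered */
}

static int mock_clear_pending (mock_w *w)
{
  if (!w->pending)
    return 0;              /* nothing outstanding */
  int r = w->revents;
  w->pending = 0;
  w->revents = 0;
  return r;                /* as if the callback had been invoked */
}
```

Note how clearing a non-pending watcher is a no-op returning 0, matching the description of ev_clear_pending above.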

WATCHER STATES

There are various watcher states mentioned throughout this manual - active, pending and so on. In this section these states and the rules to transition between them will be described in more detail - and while these rules might look complicated, they usually do "the right thing".

  • initialised
    Before a watcher can be registered with the event loop it has to be initialised. This can be done with a call to ev_TYPE_init, or calls to ev_init followed by the watcher-specific ev_TYPE_set function.
    In this state it is simply some block of memory that is suitable for use in an event loop. It can be moved around, freed, reused etc. at will - as long as you either keep the memory contents intact, or call ev_TYPE_init again.

  • started/running/active
    Once a watcher has been started with a call to ev_TYPE_start it becomes property of the event loop, and is actively waiting for events. While in this state it cannot be accessed (except in a few documented ways), moved, freed or anything else - the only legal thing is to keep a pointer to it, and call libev functions on it that are documented to work on active watchers.

  • pending
    If a watcher is active and libev determines that an event it is interested in has occurred (such as a timer expiring), it will become pending. It will stay in this pending state until either it is stopped or its callback is about to be invoked, so it is not normally pending inside the watcher callback.

The watcher might or might not be active while it is pending (for example, an expired non-repeating timer can be pending but no longer active). If it is stopped, it can be freely accessed (e.g. by calling ev_TYPE_set), but it is still property of the event loop at this time, so cannot be moved, freed or reused. And if it is active the rules described in the previous item still apply.

It is also possible to feed an event on a watcher that is not active (e.g. via ev_feed_event), in which case it becomes pending without being active.

  • stopped
    A watcher can be stopped implicitly by libev (in which case it might still be pending), or explicitly by calling its ev_TYPE_stop function. The latter will clear any pending state the watcher might be in, regardless of whether it was active or not, so stopping a watcher explicitly before freeing it is often a good idea.
    While stopped (and not pending) the watcher is essentially in the initialised state, that is, it can be reused, moved and modified in any way you wish (but when you trash the memory block, you need to ev_TYPE_init it again).

WATCHER PRIORITY MODELS

Many event loops support watcher priorities, which are usually small integers that influence the ordering of event callback invocation between watchers in some way, all else being equal.

In libev, watcher priorities can be set using ev_set_priority. See its description for the more technical details such as the actual priority range.

There are two common ways how these priorities are being interpreted by event loops:

In the more common lock-out model, higher priorities "lock out" invocation of lower priority watchers, which means as long as higher priority watchers receive events, lower priority watchers are not being invoked.

The less common only-for-ordering model uses priorities solely to order callback invocation within a single event loop iteration: Higher priority watchers are invoked before lower priority ones, but they all get invoked before polling for new events.

Libev uses the second (only-for-ordering) model for all its watchers except for idle watchers (which use the lock-out model).

The rationale behind this is that implementing the lock-out model for watchers is not well supported by most kernel interfaces, and most event libraries will just poll for the same events again and again as long as their callbacks have not been executed, which is very inefficient in the common case of one high-priority watcher locking out a mass of lower priority ones.

Static (ordering) priorities are most useful when you have two or more watchers handling the same resource: a typical usage example is having an ev_io watcher to receive data, and an associated ev_timer to handle timeouts. Under load, data might be received while the program handles other jobs, but since timers normally get invoked first, the timeout handler will be executed before checking for data. In that case, giving the timer a lower priority than the I/O watcher ensures that I/O will be handled first even under adverse conditions (which is usually, but not always, what you want).
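The only-for-ordering model can be sketched in miniature: within one loop iteration, every pending watcher is invoked, higher priorities first, so no watcher is starved (illustrative code, not libev's internals):

```c
#include <assert.h>
#include <stdlib.h>

/* Minimal pending-watcher record: a priority and an id. */
typedef struct { int pri; int id; } pw;

static int by_pri_desc (const void *a, const void *b)
{
  return ((const pw *) b)->pri - ((const pw *) a)->pri;
}

/* Invoke all pending watchers in priority order, recording the order
   of ids; even the lowest priority one gets invoked this iteration. */
static int run_pending (pw *ws, int n, int *order)
{
  qsort (ws, n, sizeof *ws, by_pri_desc);
  for (int i = 0; i < n; i++)
    order[i] = ws[i].id;
  return n;
}
```

Contrast this with the lock-out model, where the presence of a higher-priority pending watcher would suppress the lower-priority invocations entirely.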

Since idle watchers use the "lock-out" model, meaning that idle watchers will only be executed when no same or higher priority watchers have received events, they can be used to implement the "lock-out" model when required.

For example, to emulate how many other event libraries handle priorities, you can associate an ev_idle watcher to each such watcher, and in the normal watcher callback, you just start the idle watcher. The real processing is done in the idle watcher callback. This causes libev to continuously poll and process kernel event data for the watcher, but when the lock-out case is known to be rare (which in turn is rare :), this is workable.

Usually, however, the lock-out model implemented that way will perform miserably under the type of load it was designed to handle. In that case, it might be preferable to stop the real watcher before starting the idle watcher, so the kernel will not have to process the event in case the actual processing will be delayed for considerable time.

Here is an example of an I/O watcher that should run at a strictly lower priority than the default, and which should only process data when no other events are pending:

ev_idle idle; // actual processing watcher
ev_io io;     // actual event watcher
 
static void
io_cb (EV_P_ ev_io *w, int revents)
{
  // stop the I/O watcher, we received the event, but
  // are not yet ready to handle it.
  ev_io_stop (EV_A_ w);
 
  // start the idle watcher to handle the actual event.
  // it will not be executed as long as other watchers
  // with the default priority are receiving events.
  ev_idle_start (EV_A_ &idle);
}
 
static void
idle_cb (EV_P_ ev_idle *w, int revents)
{
  // actual processing
  read (STDIN_FILENO, ...);
 
  // have to start the I/O watcher again, as
  // we have handled the event
  ev_io_start (EV_A_ &io);
}
 
// initialisation
ev_idle_init (&idle, idle_cb);
ev_io_init (&io, io_cb, STDIN_FILENO, EV_READ);
ev_io_start (EV_DEFAULT_ &io);

In the "real" world, it might also be beneficial to start a timer, so that low-priority connections can not be locked out forever under load. This enables your program to keep a lower latency for important connections during short periods of high load, while not completely locking out less important ones.
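The anti-starvation idea can be sketched as a deadline: low-priority work stays locked out while high-priority events keep arriving, but a bound on the deferral forces it through eventually. This is a self-contained model of that suggestion (hypothetical names, not libev code):

```c
#include <assert.h>

/* Simulate lock-out with a deadline: returns the iteration number on
   which the deferred low-priority work finally runs. */
static int iterations_until_low_runs (int high_busy_iters, int deadline)
{
  int deferred = 0;
  for (int iter = 1; ; iter++)
    {
      int high_pending = iter <= high_busy_iters;
      if (!high_pending || deferred >= deadline)
        return iter;          /* low-priority work finally runs */
      deferred++;             /* locked out this iteration */
    }
}
```

In real code the deadline would be an ev_timer that, on expiry, processes (or re-prioritises) the starved connection.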
