libev Watcher Types

Watcher Types

This section describes each watcher in detail, but will not repeat information given in the last section. Any initialisation/set macros, functions and members specific to the watcher type are explained.
Members are additionally marked with either [read-only], meaning that, while the watcher is active, you can look at the member and expect some sensible content, but you must not modify it (you can modify it while the watcher is stopped to your heart's content), or [read-write], which means you can expect it to have some sensible content while the watcher is active, but you can also modify it. Modifying it may not do something sensible or take immediate effect (or do anything at all), but libev will not crash or malfunction in any way.

ev_io - is this file descriptor readable or writable?

I/O watchers check whether a file descriptor is readable or writable in each iteration of the event loop, or, more precisely, when reading would not block the process and writing would at least be able to write some data. This behaviour is called level-triggering because you keep receiving events as long as the condition persists. Remember you can stop the watcher if you don't want to act on the event and neither want to receive future events.

In general you can register as many read and/or write event watchers per fd as you want (as long as you don't confuse yourself). Setting all file descriptors to non-blocking mode is also usually a good idea (but not required if you know what you are doing).

Another thing you have to watch out for is that it is quite easy to receive "spurious" readiness notifications, that is, your callback might be called with EV_READ but a subsequent read(2) will actually block because there is no data. It is very easy to get into this situation even with a relatively standard program structure. Thus it is best to always use non-blocking I/O: An extra read(2) returning EAGAIN is far preferable to a program hanging until some data arrives.
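
As a hedged illustration of that advice - nothing here is libev API beyond the ev_io callback shape - the following sketch puts a descriptor into non-blocking mode with fcntl and treats EAGAIN/EWOULDBLOCK as a spurious wake-up (assumes <fcntl.h>, <unistd.h> and <errno.h>; the buffer size and error policy are arbitrary choices):

static void
make_nonblocking (int fd)
{
  int flags = fcntl (fd, F_GETFL, 0);
  if (flags >= 0)
    fcntl (fd, F_SETFL, flags | O_NONBLOCK);
}

static void
read_cb (struct ev_loop *loop, ev_io *w, int revents)
{
  char buf[4096];
  ssize_t n = read (w->fd, buf, sizeof buf);

  if (n > 0)
    {
      /* process buf[0..n-1] here */
    }
  else if (n == 0)
    ev_io_stop (loop, w);   /* EOF - no more data will ever arrive */
  else if (errno != EAGAIN && errno != EWOULDBLOCK && errno != EINTR)
    ev_io_stop (loop, w);   /* a real error - stop watching this fd */
  /* otherwise: spurious readiness, simply wait for the next EV_READ */
}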

If you cannot run the fd in non-blocking mode (for example you should not play around with an Xlib connection), then you have to separately re-test whether a file descriptor is really ready with a known-to-be good interface such as poll (fortunately in the case of Xlib, it already does this on its own, so it's quite safe to use). Some people additionally use SIGALRM and an interval timer, just to be sure you won't block indefinitely.

But really, best use non-blocking mode.

The special problem of disappearing file descriptors

Some backends (e.g. kqueue, epoll) need to be told about closing a file descriptor (either due to calling close explicitly or any other means, such as dup2). The reason is that you register interest in some file descriptor, but when it goes away, the operating system will silently drop this interest. If another file descriptor with the same number then is registered with libev, there is no efficient way to see that this is, in fact, a different file descriptor.

To avoid having to explicitly tell libev about such cases, libev follows the following policy: Each time ev_io_set is being called, libev will assume that this is potentially a new file descriptor, otherwise it is assumed that the file descriptor stays the same. That means that you have to call ev_io_set (or ev_io_init) when you change the descriptor even if the file descriptor number itself did not change.

This is how one would do it normally anyway; the important point is that the libev application should not optimise around libev but should leave optimisations to libev.
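
As a sketch of what that policy means in practice - assuming w is an active ev_io watcher and the descriptor number is about to be reused via dup2 - stop the watcher, call ev_io_set and start it again, even though the fd number stays the same:

ev_io_stop (loop, &w);            /* stop watching the old file                 */
dup2 (new_fd, fd);                /* same fd number, different open file        */
ev_io_set (&w, fd, EV_READ);      /* counts as a potentially new fd for libev   */
ev_io_start (loop, &w);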

The special problem of dup'ed file descriptors

Some backends (e.g. epoll) cannot register events for file descriptors, but only events for the underlying file descriptions. That means when you have dup ()'ed file descriptors or weirder constellations, and register events for them, only one file descriptor might actually receive events.

There is no workaround possible except not registering events for potentially dup ()'ed file descriptors, or resorting to EVBACKEND_SELECT or EVBACKEND_POLL.

The special problem of files

Many people try to use select (or libev) on file descriptors representing files, and expect it to become ready when their program doesn't block on disk accesses (which can take a long time on their own).

However, this cannot ever work in the "expected" way - you get a readiness notification as soon as the kernel knows whether and how much data is there, and in the case of open files, that's always the case, so you always get a readiness notification instantly, and your read (or possibly write) will still block on the disk I/O.

Another way to view it is that in the case of sockets, pipes, character devices and so on, there is another party (the sender) that delivers data on its own, but in the case of files, there is no such thing: the disk will not send data on its own, simply because it doesn't know what you wish to read - you would first have to request some data.

Since files are typically not-so-well supported by advanced notification mechanisms, libev tries hard to emulate POSIX behaviour with respect to files, even though you should not use it. The reason for this is convenience: sometimes you want to watch STDIN or STDOUT, which is usually a tty, often a pipe, but also sometimes files or special devices (for example, epoll on Linux works with /dev/random but not with /dev/urandom), and even though the file might better be served with asynchronous I/O instead of with non-blocking I/O, it is still useful when it "just works" instead of freezing.

So avoid file descriptors pointing to files when you know it (e.g. use libeio), but use them when it is convenient, e.g. for STDIN/STDOUT, or when you rarely read from a file instead of from a socket, and want to reuse the same code path.

The special problem of fork

Some backends (epoll, kqueue) do not support fork () at all or exhibit useless behaviour. Libev fully supports fork, but needs to be told about it in the child if you want to continue to use it in the child.

To support fork in your child processes, you have to call ev_loop_fork () after a fork in the child, enable EVFLAG_FORKCHECK, or resort to EVBACKEND_SELECT or EVBACKEND_POLL.
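
A minimal sketch of the ev_loop_fork route, assuming the child wants to keep running the same loop (needs <unistd.h>; error handling omitted):

pid_t pid = fork ();

if (pid == 0)
  {
    /* child: tell libev about the fork before using the loop again */
    ev_loop_fork (loop);
    ev_run (loop, 0);
  }
else
  {
    /* parent: keeps using the loop unchanged */
  }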

The special problem of SIGPIPE

While not really specific to libev, it is easy to forget about SIGPIPE: when writing to a pipe whose other end has been closed, your program gets sent a SIGPIPE, which, by default, aborts your program. For most programs this is sensible behaviour, for daemons, this is usually undesirable.

So when you encounter spurious, unexplained daemon exits, make sure you ignore SIGPIPE (and maybe make sure you log the exit status of your daemon somewhere, as that would have given you a big clue).
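
The usual daemon-side fix is a one-liner at start-up - plain POSIX, not libev (needs <signal.h>); after this, a write to a broken pipe fails with EPIPE instead of killing the process:

signal (SIGPIPE, SIG_IGN);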

The special problem of accept()ing when you can't

Many implementations of the POSIX accept function (for example, Linux after 2004) have the peculiar behaviour of not removing a connection from the pending queue in all error cases.

For example, larger servers often run out of file descriptors (because of resource limits), causing accept to fail with ENFILE but not rejecting the connection, leading to libev signalling readiness again on the next iteration (the connection still exists, after all), and typically causing the program to loop at 100% CPU usage.

Unfortunately, the set of errors that cause this issue differs between operating systems, there is usually little the application can do to remedy the situation, and no thread-safe way of removing the connection to cope with the overload is known (to me).

The simplest way to handle this situation is to just ignore it - when the program encounters an overload, it will simply loop until the situation is over. While this is a form of busy waiting, no OS offers an event-based way to handle this situation, so it is the best one can do.

A better way to handle the situation is to log any errors other than EAGAIN and EWOULDBLOCK, making sure not to flood the log with such messages, and continue as usual, which at least gives the user an idea of what could be wrong ("raise the ulimit!"). For extra points, one could also stop the ev_io watcher on the listening fd "for a while", which reduces CPU usage.

If your program is single-threaded, you could also keep a dummy file descriptor for overload situations (e.g. by opening /dev/null), and when you run into ENFILE or EMFILE, close it, run accept, close that fd, and create a new dummy fd. This will gracefully refuse clients under typical overload conditions.
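
A rough single-threaded sketch of that dummy-descriptor trick (reserve_fd and accept_cb are illustrative names, not libev API; needs <sys/socket.h>, <fcntl.h>, <unistd.h> and <errno.h>):

static int reserve_fd;   /* opened early: reserve_fd = open ("/dev/null", O_RDONLY); */

static void
accept_cb (struct ev_loop *loop, ev_io *w, int revents)
{
  int fd = accept (w->fd, 0, 0);

  if (fd >= 0)
    {
      /* set up the new connection here */
    }
  else if (errno == ENFILE || errno == EMFILE)
    {
      close (reserve_fd);                           /* free one descriptor ...   */
      fd = accept (w->fd, 0, 0);                    /* ... accept the connection */
      if (fd >= 0)
        close (fd);                                 /* ... and refuse it cleanly */
      reserve_fd = open ("/dev/null", O_RDONLY);    /* re-arm the reserve        */
    }
}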

The last way to handle it is to simply log the error and exit, as is often done with malloc failures, but this results in an easy opportunity for a DoS attack.

Watcher-Specific Functions

ev_io_init (ev_io *, callback, int fd, int events)

ev_io_set (ev_io *, int fd, int events)

Configures an ev_io watcher. The fd is the file descriptor to receive events for, and events is either EV_READ, EV_WRITE or EV_READ | EV_WRITE, to express the desire to receive the given events.

int fd [read-only]

The file descriptor being watched.

int events [read-only]

The events being watched.

Examples

Example: Call stdin_readable_cb when STDIN_FILENO has become, well, readable, but only once. Since it is likely line-buffered, you could attempt to read a whole line in the callback.

static void
stdin_readable_cb (struct ev_loop *loop, ev_io *w, int revents)
{
   ev_io_stop (loop, w);
  // read from stdin here (or from w->fd) and handle any I/O errors
}
 
...
struct ev_loop *loop = ev_default_init (0);
ev_io stdin_readable;
ev_io_init (&stdin_readable, stdin_readable_cb, STDIN_FILENO, EV_READ);
ev_io_start (loop, &stdin_readable);
ev_run (loop, 0);

ev_timer - relative and optionally repeating timeouts

Timer watchers are simple relative timers that generate an event after a given time, and optionally repeating in regular intervals after that.

The timers are based on real time, that is, if you register an event that times out after an hour and you reset your system clock to January last year, it will still time out after (roughly) one hour. "Roughly" because detecting time jumps is hard, and some inaccuracies are unavoidable (the monotonic clock option helps a lot here).

The callback is guaranteed to be invoked only after its timeout has passed (not at, so on systems with very low-resolution clocks this might introduce a small delay, see "the special problem of being too early", below). If multiple timers become ready during the same loop iteration then the ones with earlier time-out values are invoked before ones of the same priority with later time-out values (but this is no longer true when a callback calls ev_run recursively).

Be smart about timeouts

Many real-world problems involve some kind of timeout, usually for error recovery. A typical example is an HTTP request - if the other side hangs, you want to raise some error after a while.
What follows are some ways to handle this problem, from obvious and inefficient to smart and efficient.
In the following, a 60 second activity timeout is assumed - a timeout that gets reset to 60 seconds each time there is activity (e.g. each time some data or other life sign was received).

  • 1. Use a timer and stop, reinitialise and start it on activity.
    This is the most obvious, but not the most simple way: In the beginning, start the watcher:
ev_timer_init (timer, callback, 60., 0.);
ev_timer_start (loop, timer);

Then, each time there is some activity, ev_timer_stop it, initialise it and start it again:

ev_timer_stop (loop, timer);
ev_timer_set (timer, 60., 0.);
ev_timer_start (loop, timer);

This is relatively simple to implement, but means that each time there is some activity, libev will first have to remove the timer from its internal data structure and then add it again. Libev tries to be fast, but it's still not a constant-time operation.

  • 2. Use a timer and re-start it with ev_timer_again after activity.

    This is the easiest way, and involves using ev_timer_again instead of ev_timer_start.

To implement this, configure an ev_timer with a repeat value of 60 and then call ev_timer_again at start and each time you successfully read or write some data. If you go into an idle state where you do not expect data to travel on the socket, you can ev_timer_stop the timer, and ev_timer_again will automatically restart it if need be.

That means you can ignore both the ev_timer_start function and the after argument to ev_timer_set, and only ever use the repeat member and ev_timer_again.

At start:

ev_init (timer, callback);
timer->repeat = 60.;
ev_timer_again (loop, timer);

Each time there is some activity:

ev_timer_again (loop, timer);

It is even possible to change the time-out on the fly, regardless of whether the watcher is active or not:

timer->repeat = 30.;
ev_timer_again (loop, timer);

This is slightly more efficient than stopping/starting the timer each time you want to modify its timeout value, as libev does not have to completely remove and re-insert the timer from/into its internal data structure.

It is, however, even simpler than the "obvious" way to do it.

  • 3. Let the timer time out, but then re-arm it as required.
    This method is more tricky, but usually most efficient: Most timeouts are relatively long compared to the intervals between other activity - in our example, within 60 seconds, there are usually many I/O events with associated activity resets.

In this case, it would be more efficient to leave the ev_timer alone, but remember the time of last activity, and check for a real timeout only within the callback:

ev_tstamp timeout = 60.;
ev_tstamp last_activity; // time of last activity
ev_timer timer;
 
static void
callback (EV_P_ ev_timer *w, int revents)
{
  // calculate when the timeout would happen
  ev_tstamp after = last_activity - ev_now (EV_A) + timeout;
 
  // if negative, it means the timeout already occurred
  if (after < 0.)
    {
      // timeout occurred, take action
    }
  else
    {
      // callback was invoked, but there was some recent 
      // activity. simply restart the timer to time out
      // after "after" seconds, which is the earliest time
      // the timeout can occur.
      ev_timer_set (w, after, 0.);
      ev_timer_start (EV_A_ w);
    }
}

To summarise the callback: first calculate in how many seconds the timeout will occur (by calculating the absolute time when it would occur, last_activity + timeout, and subtracting the current time, ev_now (EV_A) from that).

If this value is negative, then we are already past the timeout, i.e. we timed out, and need to do whatever is needed in this case.

Otherwise, we now know the earliest time at which the timeout would trigger, and simply start the timer with this timeout value.

In other words, each time the callback is invoked it will check whether the timeout occurred. If not, it will simply reschedule itself to check again at the earliest time it could time out. Rinse. Repeat.

This scheme causes more callback invocations (about one every 60 seconds minus half the average time between activity), but virtually no calls to libev to change the timeout.

To start the machinery, simply initialise the watcher and set last_activity to the current time (meaning there was some activity just now), then call the callback, which will "do the right thing" and start the timer:

last_activity = ev_now (EV_A);
ev_init (&timer, callback);
callback (EV_A_ &timer, 0);

When there is some activity, simply store the current time in last_activity, no libev calls at all:

if (activity detected)
  last_activity = ev_now (EV_A);

When your timeout value changes, then the timeout can be changed by simply providing a new value, stopping the timer and calling the callback, which will again do the right thing (for example, time out immediately :).

timeout = new_value;
ev_timer_stop (EV_A_ &timer);
callback (EV_A_ &timer, 0);

This technique is slightly more complex, but in most cases where the time-out is unlikely to be triggered, much more efficient.

  • 4. Wee, just use a double-linked list for your timeouts.
    If there is not one request, but many thousands (millions...), all employing some kind of timeout with the same timeout value, then one can do even better:

When starting the timeout, calculate the timeout value and put the timeout at the end of the list.

Then use an ev_timer to fire when the timeout at the beginning of the list is expected to fire (for example, using the technique #3).

When there is some activity, remove the timer from the list, recalculate the timeout, append it to the end of the list again, and make sure to update the ev_timer if it was taken from the beginning of the list.

This way, one can manage an unlimited number of timeouts in O(1) time for starting, stopping and updating the timers, at the expense of a major complication, and having to use a constant timeout. The constant timeout ensures that the list stays sorted.
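
A rough sketch of method #4, assuming a fixed 60-second timeout and an intrusive doubly-linked list - the conn struct and the helper names are made up for illustration; only the ev_timer/ev_now calls are libev API:

typedef struct conn
{
  struct conn *prev, *next;   /* intrusive list links                     */
  ev_tstamp timeout_at;       /* absolute time at which this conn expires */
} conn;

static conn *head, *tail;     /* oldest deadline first, newest last            */
static ev_timer list_timer;   /* at start-up: ev_init (&list_timer, list_timer_cb); */
static const ev_tstamp TIMEOUT = 60.;

static void
rearm (EV_P)
{
  ev_timer_stop (EV_A_ &list_timer);
  if (head)
    {
      ev_tstamp after = head->timeout_at - ev_now (EV_A);
      ev_timer_set (&list_timer, after > 0. ? after : 0., 0.);
      ev_timer_start (EV_A_ &list_timer);
    }
}

/* on activity: unlink c (which must already be linked), refresh its deadline
   and append it at the tail - the constant timeout keeps the list sorted */
static void
touch (EV_P_ conn *c)
{
  int was_head = (c == head);

  if (c->prev) c->prev->next = c->next; else head = c->next;
  if (c->next) c->next->prev = c->prev; else tail = c->prev;

  c->timeout_at = ev_now (EV_A) + TIMEOUT;
  c->prev = tail; c->next = 0;
  if (tail) tail->next = c; else head = c;
  tail = c;

  if (was_head)
    rearm (EV_A);
}

static void
list_timer_cb (EV_P_ ev_timer *w, int revents)
{
  while (head && head->timeout_at <= ev_now (EV_A))
    {
      conn *c = head;
      head = c->next;
      if (head) head->prev = 0; else tail = 0;
      /* ... time out connection c here ... */
    }

  rearm (EV_A);
}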

So which method is the best?

Method #2 is a simple no-brain-required solution that is adequate in most situations. Method #3 requires a bit more thinking, but handles many cases better, and isn't very complicated either. In most cases, choosing either one is fine, with #3 being better in typical situations.

Method #1 is almost always a bad idea, and buys you nothing. Method #4 is rather complicated, but extremely efficient, something that really pays off after the first million or so of active timers, i.e. it's usually overkill :)

The special problem of being too early

If you ask a timer to call your callback after three seconds, then you expect it to be invoked after three seconds - but of course, this cannot be guaranteed to infinite precision. Less obviously, it cannot be guaranteed to any precision by libev - imagine somebody suspending the process with a STOP signal for a few hours for example.

So, libev tries to invoke your callback as soon as possible after the delay has occurred, but cannot guarantee this.

A less obvious failure mode is calling your callback too early: many event loops compare timestamps with an "elapsed delay >= requested delay" check, but this can cause your callback to be invoked much earlier than you would expect.

To see why, imagine a system with a clock that only offers full second resolution (think windows if you can't come up with a broken enough OS yourself). If you schedule a one-second timer at the time 500.9, then the event loop will schedule your timeout to elapse at a system time of 500 (500.9 truncated to the resolution) + 1, or 501.

If an event library looks at the timeout 0.1s later, it will see "501 >= 501" and invoke the callback 0.1s after it was started, even though a one-second delay was requested - this is being "too early", despite best intentions.

This is the reason why libev will never invoke the callback if the elapsed delay equals the requested delay, but only when the elapsed delay is larger than the requested delay. In the example above, libev would only invoke the callback at system time 502, or 1.1s after the timer was started.

So, while libev cannot guarantee that your callback will be invoked exactly when requested, it can and does guarantee that the requested delay has actually elapsed, or in other words, it always errs on the "too late" side of things.

The special problem of time updates

Establishing the current time is a costly operation (it usually takes at least one system call): EV therefore updates its idea of the current time only before and after ev_run collects new events, which causes a growing difference between ev_now () and ev_time () when handling lots of events in one iteration.

The relative timeouts are calculated relative to the ev_now () time. This is usually the right thing as this timestamp refers to the time of the event triggering whatever timeout you are modifying/starting. If you suspect event processing to be delayed and you need to base the timeout on the current time, use something like the following to adjust for it:

ev_timer_set (&timer, after + (ev_time () - ev_now ()), 0.);

If the event loop is suspended for a long time, you can also force an update of the time returned by ev_now () by calling ev_now_update (), although that will push the event time of all outstanding events further into the future.

The special problem of unsynchronised clocks

Modern systems have a variety of clocks - libev itself uses the normal "wall clock" clock and, if available, the monotonic clock (to avoid time jumps).

Neither of these clocks is synchronised with each other or any other clock on the system, so ev_time () might return a considerably different time than gettimeofday () or time (). On a GNU/Linux system, for example, a call to gettimeofday might return a second count that is one higher than a directly following call to time.

The moral of this is to only compare libev-related timestamps with ev_time () and ev_now (), at least if you want better precision than a second or so.

One more problem arises due to this lack of synchronisation: if libev uses the system monotonic clock and you compare timestamps from ev_time or ev_now from when you started your timer and when your callback is invoked, you will find that sometimes the callback is a bit "early".

This is because ev_timers work in real time, not wall clock time, so libev makes sure your callback is not invoked before the delay happened, measured according to the real time, not the system clock.

If your timeouts are based on a physical timescale (e.g. "time out this connection after 100 seconds") then this shouldn't bother you as it is exactly the right behaviour.

If you want to compare wall clock/system timestamps to your timers, then you need to use ev_periodics, as these are based on the wall clock time, where your comparisons will always generate correct results.
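
For example, something that should fire on wall-clock hour boundaries would be an ev_periodic rather than an ev_timer - a minimal sketch (the callback name is illustrative):

static void
clock_cb (struct ev_loop *loop, ev_periodic *w, int revents)
{
  /* invoked at every full hour of wall-clock time */
}

ev_periodic hourly_tick;
ev_periodic_init (&hourly_tick, clock_cb, 0., 3600., 0);  /* trigger at the next full hour */
ev_periodic_start (loop, &hourly_tick);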

The special problem of suspended animation

When you leave the server world it is quite customary to hit machines that can suspend/hibernate - what happens to the clocks during such a suspend?

Some quick tests made with a Linux 2.6.28 indicate that a suspend freezes all processes, while the clocks (times, CLOCK_MONOTONIC) continue to run until the system is suspended, but they will not advance while the system is suspended. That means, on resume, it will be as if the program was frozen for a few seconds, but the suspend time will not be counted towards ev_timer when a monotonic clock source is used. The real time clock advanced as expected, but if it is used as sole clocksource, then a long suspend would be detected as a time jump by libev, and timers would be adjusted accordingly.

I would not be surprised to see different behaviour in different operating systems, OS versions or even different hardware.

The other form of suspend (job control, or sending a SIGSTOP) will see a time jump in the monotonic clocks and the realtime clock. If the program is suspended for a very long time, and monotonic clock sources are in use, then you can expect ev_timers to expire as the full suspension time will be counted towards the timers. When no monotonic clock source is in use, then libev will again assume a timejump and adjust accordingly.

It might be beneficial for this latter case to call ev_suspend and ev_resume in code that handles SIGTSTP, to at least get deterministic behaviour in this case (you can do nothing against SIGSTOP).
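
One possible way to wire this up, as a rough sketch: suspend timer processing, stop yourself, and resume once execution continues after SIGCONT (strict async-signal-safety concerns are glossed over here):

static struct ev_loop *loop;   /* the loop whose timers we want to protect */

static void
sigtstp_handler (int signum)
{
  ev_suspend (loop);   /* freeze timer processing                     */
  raise (SIGSTOP);     /* actually stop; execution resumes here ...   */
  ev_resume (loop);    /* ... after SIGCONT, as if no time had passed */
}

/* during start-up: */
signal (SIGTSTP, sigtstp_handler);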

Watcher-Specific Functions and Data Members

ev_timer_init (ev_timer *, callback, ev_tstamp after, ev_tstamp repeat)

ev_timer_set (ev_timer *, ev_tstamp after, ev_tstamp repeat)

Configure the timer to trigger after after seconds. If repeat is 0., then it will automatically be stopped once the timeout is reached. If it is positive, then the timer will automatically be configured to trigger again repeat seconds later, again, and again, until stopped manually.

The timer itself will do a best-effort at avoiding drift, that is, if you configure a timer to trigger every 10 seconds, then it will normally trigger at exactly 10 second intervals. If, however, your program cannot keep up with the timer (because it takes longer than those 10 seconds to do stuff) the timer will not fire more than once per event loop iteration.
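
For example, a plain 10-second repeating tick under these rules could look like this (the callback name is illustrative):

static void
tick_cb (struct ev_loop *loop, ev_timer *w, int revents)
{
  /* runs roughly every 10 seconds, but at most once per loop iteration */
}

ev_timer tick;
ev_timer_init (&tick, tick_cb, 10., 10.);   /* first after 10s, then every 10s */
ev_timer_start (loop, &tick);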

ev_timer_again (loop, ev_timer *)

This will act as if the timer timed out, and restarts it again if it is repeating. It basically works like calling ev_timer_stop, updating the timeout to the repeat value and calling ev_timer_start.

The exact semantics are as in the following rules, all of which will be applied to the watcher:

If the timer is pending, the pending status is always cleared.

If the timer is started but non-repeating, stop it (as if it timed out, without invoking it).

If the timer is repeating, make the repeat value the new timeout and start the timer, if necessary.

This sounds a bit complicated, see "Be smart about timeouts", above, for a usage example.

ev_tstamp ev_timer_remaining (loop, ev_timer *)

Returns the remaining time until a timer fires. If the timer is active, then this time is relative to the current event loop time, otherwise it's the timeout value currently configured.

That is, after an ev_timer_set (w, 5, 7), ev_timer_remaining returns 5. When the timer is started and one second passes, ev_timer_remaining will return 4. When the timer expires and is restarted, it will return roughly 7 (likely slightly less as callback invocation takes some time, too), and so on.
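
A tiny usage sketch of that, e.g. for a progress display (assumes <stdio.h> and that mytimer is an active ev_timer, as in the examples below):

ev_tstamp left = ev_timer_remaining (loop, &mytimer);
printf ("timer fires in %.1f seconds\n", left);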

ev_tstamp repeat [read-write]

The current repeat value. Will be used each time the watcher times out or ev_timer_again is called, and determines the next timeout (if any), which is also when any modifications are taken into account.

Examples

Example: Create a timer that fires after 60 seconds.

static void
one_minute_cb (struct ev_loop *loop, ev_timer *w, int revents)
{
  // one minute over, w is actually stopped right here
}
 
ev_timer mytimer;
ev_timer_init (&mytimer, one_minute_cb, 60., 0.);
ev_timer_start (loop, &mytimer);

Example: Create a timeout timer that times out after 10 seconds of inactivity.

static void
timeout_cb (struct ev_loop *loop, ev_timer *w, int revents)
{
  // ten seconds without any activity
}
 
ev_timer mytimer;
ev_timer_init (&mytimer, timeout_cb, 0., 10.); /* note, only repeat used */
ev_timer_again (loop, &mytimer); /* start timer */
ev_run (loop, 0);
 
// and in some piece of code that gets executed on any "activity":
// reset the timeout to start ticking again at 10 seconds
ev_timer_again (loop, &mytimer);