Android Binder, Explained Deep into the Kernel (Part 3)
- Preface
- I. Kernel source analysis of the service lookup process
- 1. Recap of the client's user-space service-lookup code
- 2. Kernel source analysis of the client's service lookup
- 2.1 The client sends data to service_manager
- 1. binder_ioctl
- 2. binder_ioctl_write_read
- 3. binder_thread_write
- 4. binder_transaction
- 4.1 Find the target process, service_manager
- 4.2 Copy data.ptr.offsets of the client's binder_transaction_data into service_manager's mmap'd kernel space
- 4.3 Copy data.ptr.buffer of the client's binder_transaction_data into service_manager's mmap'd kernel space
- 4.4 Put the pending data on the todo list of service_manager's binder_proc or binder_thread
- 4.5 binder_proc_transaction puts the pending data on service_manager's todo list and wakes service_manager up
- 2.2 service_manager is woken up
- 1. service_manager issues an ioctl to read the data from the kernel
- 2. binder_ioctl
- 3. binder_ioctl_write_read
- 4. binder_thread_read
- 5. Layout of the data read from service_manager's kernel space
- 6. binder_parse parses the data the client sent to service_manager
- 7. svcmgr_handler processes the client's request and looks up the handle of the requested service
- 8. binder_send_reply sends the looked-up service handle back to the driver
- 2.3 The Binder driver receives service_manager's reply to the client's request
- 1. binder_ioctl
- 2. binder_ioctl_write_read
- 3. binder_thread_write
- 4. binder_transaction
- 4.1 Find the process to reply to
- 4.2 Handle the flat_binder_object
- 4.3 Find the service's binder_node from the handle
- 4.4 Create a binder_ref for the client that points to the service's binder_node
- 4.5 Put the data on the client's todo list and wake the client up
- 2.4 The client is woken up and obtains the handle of its binder_ref
- 3. A brief summary diagram of service registration and lookup
- II. Kernel source analysis of the service usage process
- 1. The idea behind the service usage flow
- 2. Kernel source analysis of the client using the service
- 2.1 Send data to the server
- 1. sayhello_to
- 2. binder_call
- 3. binder_ioctl
- 4. binder_ioctl_write_read
- 5. binder_thread_write
- 6. binder_transaction
- 2.2 The server is woken up and processes the client's data
- 2.3 The client receives the server's reply
- III. Afterword
Preface
Part 1 of this series implemented a C-language demo of Binder IPC with a client and a server, and analyzed in detail the user-space source for the server registering its service with service_manager and for the client looking that service up. The analysis stopped at user space. Part 2 then walked through the Binder driver's kernel source for the service registration path. This article continues with the kernel source for the service lookup and service usage paths. With Part 2 as a foundation, this one should read much more easily, so let's get started.
I. Kernel source analysis of the service lookup process
1. Recap of the client's user-space service-lookup code
Part 1 analyzed the client's user-space service-lookup code in detail; here is a brief recap of how the client obtains a service through svcmgr_lookup.
int main(int argc, char **argv)
{
    int fd;
    struct binder_state *bs;
    uint32_t svcmgr = BINDER_SERVICE_MANAGER;
    uint32_t handle;
    int ret;

    if (argc < 2) {
        fprintf(stderr, "Usage:\n");
        fprintf(stderr, "%s <hello|goodbye>\n", argv[0]);
        fprintf(stderr, "%s <hello|goodbye> <name>\n", argv[0]);
        return -1;
    }

    // open the binder driver
    bs = binder_open(128*1024);
    if (!bs) {
        fprintf(stderr, "failed to open binder driver\n");
        return -1;
    }
    g_bs = bs;

    // send data to service_manager and obtain the handle of the hello service
    handle = svcmgr_lookup(bs, svcmgr, "hello");
    if (!handle) {
        fprintf(stderr, "failed to get hello service\n");
        return -1;
    }
    g_hello_handle = handle;
    fprintf(stderr, "Handle for hello service = %d\n", g_hello_handle);

    /* send data to the server */
    if (!strcmp(argv[1], "hello")) {
        if (argc == 2) {
            sayhello();
        } else if (argc == 3) {
            ret = sayhello_to(argv[2]);
            fprintf(stderr, "get ret of sayhello_to = %d\n", ret);
        }
    }

    binder_release(bs, handle);
    return 0;
}

uint32_t svcmgr_lookup(struct binder_state *bs, uint32_t target, const char *name)
{
    uint32_t handle;
    unsigned iodata[512/4];
    struct binder_io msg, reply;

    bio_init(&msg, iodata, sizeof(iodata), 4); // carve up the iodata buffer for msg
    bio_put_uint32(&msg, 0);                   // strict mode header
    bio_put_string16_x(&msg, SVC_MGR_NAME);    // write "android.os.IServiceManager"
    bio_put_string16_x(&msg, name);            // write the service name, "hello"

    // target = 0 denotes service_manager; SVC_MGR_CHECK_SERVICE selects its lookup function
    if (binder_call(bs, &msg, &reply, target, SVC_MGR_CHECK_SERVICE))
        return 0;

    // fetch the handle of the hello service
    handle = bio_get_ref(&reply);
    if (handle)
        binder_acquire(bs, handle);

    binder_done(bs, &msg, &reply);

    return handle;
}

int binder_call(struct binder_state *bs,
                struct binder_io *msg, struct binder_io *reply,
                uint32_t target, uint32_t code)
{
    int res;
    struct binder_write_read bwr;
    struct {
        uint32_t cmd;
        struct binder_transaction_data txn;
    } __attribute__((packed)) writebuf;
    unsigned readbuf[32];

    if (msg->flags & BIO_F_OVERFLOW) {
        fprintf(stderr, "binder: txn buffer overflow\n");
        goto fail;
    }

    // build the binder_transaction_data
    writebuf.cmd = BC_TRANSACTION;       // ioctl command type
    writebuf.txn.target.handle = target; // which process the data is sent to
    writebuf.txn.code = code;            // which function of that process to invoke
    writebuf.txn.flags = 0;
    writebuf.txn.data_size = msg->data - msg->data0; // size of the payload itself
    writebuf.txn.offsets_size = ((char*) msg->offs) - ((char*) msg->offs0); // size of the offsets header; its entries mark flat objects (on the sending side, written by bio_put_obj(&msg, ptr))
    writebuf.txn.data.ptr.buffer = (uintptr_t)msg->data0;  // start of the payload memory
    writebuf.txn.data.ptr.offsets = (uintptr_t)msg->offs0; // start of the offsets memory

    // build the binder_write_read
    bwr.write_size = sizeof(writebuf);
    bwr.write_consumed = 0;
    bwr.write_buffer = (uintptr_t) &writebuf;

    hexdump(msg->data0, msg->data - msg->data0);
    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (uintptr_t) readbuf;

        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr); // hand the data to the driver via ioctl
        if (res < 0) {
            fprintf(stderr, "binder: ioctl failed (%s)\n", strerror(errno));
            goto fail;
        }

        // parse the data in readbuf into reply
        res = binder_parse(bs, reply, (uintptr_t) readbuf, bwr.read_consumed, 0);
        if (res == 0) return 0;
        if (res < 0) goto fail;
    }

fail:
    memset(reply, 0, sizeof(*reply));
    reply->flags |= BIO_F_IOERROR;
    return -1;
}
As you can see, svcmgr_lookup also assembles a binder_io, then calls binder_call, which wraps it into a binder_write_read and finally hands it to service_manager through ioctl.
This mirrors the service registration flow analyzed in Part 2: organize the data, then ship it to service_manager via ioctl. The sketch below shows the exact framing handed to the driver.
With that in mind, let's step into the Linux kernel source and see what actually happens after the data is sent to service_manager.
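The following is a reconstruction from the binder_call listing above (not additional source), visualizing what ioctl(BINDER_WRITE_READ) receives:

/*
 * What binder_call() hands to ioctl(BINDER_WRITE_READ) (sketch):
 *
 *   bwr.write_buffer ──► +------------------------------------------+
 *                        | cmd = BC_TRANSACTION                     |
 *                        +------------------------------------------+
 *                        | struct binder_transaction_data txn       |
 *                        |   .target.handle = 0  (service_manager)  |
 *                        |   .code = SVC_MGR_CHECK_SERVICE          |
 *                        |   .data.ptr.buffer  ──► payload ("hello")|
 *                        |   .data.ptr.offsets ──► offsets array    |
 *                        +------------------------------------------+
 */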
2. Kernel source analysis of the client's service lookup
The client's ioctl call lands in the binder_ioctl function of the kernel Binder driver; the listings below come from the driver source.
2.1 The client sends data to service_manager
1. binder_ioctl
// The client calls ioctl to send data to the driver:
res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
// which the kernel Binder driver handles in binder_ioctl:

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    int ret;
    // get this process's binder_proc; it was created when the process opened the binder driver (analyzed earlier)
    struct binder_proc *proc = filp->private_data;
    struct binder_thread *thread;
    void __user *ubuf = (void __user *)arg;

    /*pr_info("binder_ioctl: %d:%d %x %lx\n",
            proc->pid, current->pid, cmd, arg);*/

    binder_selftest_alloc(&proc->alloc);

    trace_binder_ioctl(cmd, arg);

    ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
    if (ret)
        goto err_unlocked;

    // create (or look up) the binder_thread for this process
    thread = binder_get_thread(proc);
    if (thread == NULL) {
        ret = -ENOMEM;
        goto err;
    }

    // from the analysis above, cmd is BINDER_WRITE_READ here
    switch (cmd) {
    case BINDER_WRITE_READ:
        // handle the client's data
        ret = binder_ioctl_write_read(filp, arg, thread);
        if (ret)
            goto err;
        break;
    ......
}
2. binder_ioctl_write_read
static int binder_ioctl_write_read(struct file *filp, unsigned long arg,
                struct binder_thread *thread)
{
    int ret = 0;
    struct binder_proc *proc = filp->private_data;
    void __user *ubuf = (void __user *)arg; // the user-space data
    // binder_write_read to receive what the client sent
    struct binder_write_read bwr;

    // copy the binder_write_read header from user space into kernel space
    if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
        ret = -EFAULT;
        goto out;
    }
    binder_debug(BINDER_DEBUG_READ_WRITE,
             "%d:%d write %lld at %016llx, read %lld at %016llx\n",
             proc->pid, thread->pid,
             (u64)bwr.write_size, (u64)bwr.write_buffer,
             (u64)bwr.read_size, (u64)bwr.read_buffer);

    // as analyzed above, the client's data lives in binder_write_read, so write_size > 0 here
    if (bwr.write_size > 0) { // write data to the driver
        ret = binder_thread_write(proc, thread,
                      bwr.write_buffer, bwr.write_size,
                      &bwr.write_consumed);
        trace_binder_write_done(ret);
        if (ret < 0) {
            bwr.read_consumed = 0;
            if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                ret = -EFAULT;
            goto out;
        }
    }
    if (bwr.read_size > 0) { // read data from the driver
        ret = binder_thread_read(proc, thread, bwr.read_buffer,
                     bwr.read_size, &bwr.read_consumed,
                     filp->f_flags & O_NONBLOCK);
        trace_binder_read_done(ret);
        binder_inner_proc_lock(proc);
        if (!binder_worklist_empty_ilocked(&proc->todo))
            binder_wakeup_proc_ilocked(proc);
        binder_inner_proc_unlock(proc);
        if (ret < 0) {
            if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                ret = -EFAULT;
            goto out;
        }
    }
    binder_debug(BINDER_DEBUG_READ_WRITE,
             "%d:%d wrote %lld of %lld, read return %lld of %lld\n",
             proc->pid, thread->pid,
             (u64)bwr.write_consumed, (u64)bwr.write_size,
             (u64)bwr.read_consumed, (u64)bwr.read_size);

    // copy the result back to user space
    if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
        ret = -EFAULT;
        goto out;
    }
out:
    return ret;
}

static inline int copy_from_user(void *to, const void __user volatile *from,
                 unsigned long n)
{
    volatile_memcpy(to, from, n);
    return 0;
}

static inline int copy_to_user(void __user volatile *to, const void *from,
                   unsigned long n)
{
    volatile_memcpy(to, from, n);
    return 0;
}
3. binder_thread_write
At this point cmd is BC_TRANSACTION.
static int binder_thread_write(struct binder_proc *proc,
            struct binder_thread *thread,
            binder_uintptr_t binder_buffer, size_t size,
            binder_size_t *consumed)
{
    uint32_t cmd;
    struct binder_context *context = proc->context;
    // per the send-side layout summarized above, this buffer consists of
    // a cmd followed by a binder_transaction_data
    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
    // for freshly sent data consumed == 0, so ptr starts at the beginning of the user buffer
    void __user *ptr = buffer + *consumed;
    // end of the buffer
    void __user *end = buffer + size;

    // walk the entries (cmd + binder_transaction_data) the client sent
    while (ptr < end && thread->return_error.cmd == BR_OK) {
        int ret;

        // read the cmd value from the user-space buffer
        if (get_user(cmd, (uint32_t __user *)ptr))
            return -EFAULT;
        // advance past cmd to the start of the binder_transaction_data
        ptr += sizeof(uint32_t);
        trace_binder_command(cmd);
        if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.bc)) {
            atomic_inc(&binder_stats.bc[_IOC_NR(cmd)]);
            atomic_inc(&proc->stats.bc[_IOC_NR(cmd)]);
            atomic_inc(&thread->stats.bc[_IOC_NR(cmd)]);
        }
        // per the send-side layout above, cmd is BC_TRANSACTION here
        switch (cmd) {
        ......
        /*
         * BC_TRANSACTION: cmd with which a process sends data
         * BR_TRANSACTION: cmd with which a process receives data sent via BC_TRANSACTION
         * BC_REPLY:       cmd with which a process replies
         * BR_REPLY:       cmd with which a process receives a BC_REPLY reply
         */
        case BC_TRANSACTION:
        case BC_REPLY: {
            struct binder_transaction_data tr;

            // copy the binder_transaction_data from user space into kernel space
            if (copy_from_user(&tr, ptr, sizeof(tr)))
                return -EFAULT;
            // advance past it to the start of the next cmd
            ptr += sizeof(tr);
            // process the binder_transaction_data
            binder_transaction(proc, thread, &tr,
                       cmd == BC_REPLY, 0);
            break;
        }
        }
    }
    ......
}

int get_user(int *val, const int __user *ptr) {
    if (copy_from_user(val, ptr, sizeof(int))) {
        return -EFAULT; // error code
    }
    return 0; // success
}
4. binder_transaction
4.1 Find the target process, service_manager
static void binder_transaction(struct binder_proc *proc,
                   struct binder_thread *thread,
                   struct binder_transaction_data *tr, int reply,
                   binder_size_t extra_buffers_size)
{
    ......
    // the client is sending data into the kernel, so reply is false here
    if (reply) { // path where the Binder driver delivers a reply back to user space
        ......
    } else { // path where user-space data is handed to the kernel
        // 1. Find the target process. We are querying service_manager, so the
        //    target is the process with tr->target.handle == 0, i.e. service_manager
        if (tr->target.handle) { // nonzero handle: some process other than service_manager
            ......
        } else { // handle the service_manager case
            mutex_lock(&context->context_mgr_node_lock);
            // this node was created with the BINDER_SET_CONTEXT_MGR cmd when service_manager started
            target_node = context->binder_context_mgr_node;
            if (target_node)
                target_node = binder_get_node_refs_for_txn(
                        target_node, &target_proc, &return_error);
            else
                return_error = BR_DEAD_REPLY;
            mutex_unlock(&context->context_mgr_node_lock);
            if (target_node && target_proc->pid == proc->pid) {
                binder_user_error("%d:%d got transaction to context manager from process owning it\n",
                          proc->pid, thread->pid);
                return_error = BR_FAILED_REPLY;
                return_error_param = -EINVAL;
                return_error_line = __LINE__;
                goto err_invalid_target_handle;
            }
        }
        ......
    }
}
4.2 Copy data.ptr.offsets of the client's binder_transaction_data into service_manager's mmap'd kernel space
static void binder_transaction(struct binder_proc *proc,
                   struct binder_thread *thread,
                   struct binder_transaction_data *tr, int reply,
                   binder_size_t extra_buffers_size)
{
    int ret;
    struct binder_transaction *t;
    struct binder_work *w;
    struct binder_work *tcomplete;
    binder_size_t buffer_offset = 0;
    binder_size_t off_start_offset, off_end_offset;
    binder_size_t off_min;
    binder_size_t sg_buf_offset, sg_buf_end_offset;
    binder_size_t user_offset = 0;
    struct binder_proc *target_proc = NULL;
    struct binder_thread *target_thread = NULL;
    struct binder_node *target_node = NULL;
    struct binder_transaction *in_reply_to = NULL;
    struct binder_transaction_log_entry *e;
    uint32_t return_error = 0;
    uint32_t return_error_param = 0;
    uint32_t return_error_line = 0;
    binder_size_t last_fixup_obj_off = 0;
    binder_size_t last_fixup_min_off = 0;
    struct binder_context *context = proc->context;
    int t_debug_id = atomic_inc_return(&binder_last_id);
    ktime_t t_start_time = ktime_get();
    char *secctx = NULL;
    u32 secctx_sz = 0;
    struct list_head sgc_head;
    struct list_head pf_head;
    const void __user *user_buffer = (const void __user *)(uintptr_t)tr->data.ptr.buffer;

    INIT_LIST_HEAD(&sgc_head);
    INIT_LIST_HEAD(&pf_head);

    e = binder_transaction_log_add(&binder_transaction_log);
    e->debug_id = t_debug_id;
    e->call_type = reply ? 2 : !!(tr->flags & TF_ONE_WAY);
    e->from_proc = proc->pid;
    e->from_thread = thread->pid;
    e->target_handle = tr->target.handle;
    e->data_size = tr->data_size;
    e->offsets_size = tr->offsets_size;
    strscpy(e->context_name, proc->context->name, BINDERFS_MAX_NAME);

    binder_inner_proc_lock(proc);
    binder_set_extended_error(&thread->ee, t_debug_id, BR_OK, 0);
    binder_inner_proc_unlock(proc);

    if (reply) {
        // find the process to reply to
        ......
    } else {
        // 1. find the target process
        if (tr->target.handle) { // the target is not service_manager
            .....
        } else { // the target is service_manager
            // find service_manager's binder_node
            .....
        }
        ......
    }
    if (target_thread)
        e->to_thread = target_thread->pid;
    e->to_proc = target_proc->pid;

    /* TODO: reuse incoming transaction for reply */
    // allocate the binder_transaction
    t = kzalloc(sizeof(*t), GFP_KERNEL);
    .....
    if (!reply && !(tr->flags & TF_ONE_WAY))
        t->from = thread;
    else
        t->from = NULL;
    // record the basic information of both sides
    t->from_pid = proc->pid;
    t->from_tid = thread->pid;
    t->sender_euid = task_euid(proc->tsk);
    t->to_proc = target_proc;
    t->to_thread = target_thread;
    t->code = tr->code;
    t->flags = tr->flags;
    t->priority = task_nice(current);
    ......
    t->buffer = binder_alloc_new_buf(&target_proc->alloc, tr->data_size,
        tr->offsets_size, extra_buffers_size,
        !reply && (t->flags & TF_ONE_WAY));
    ......
    t->buffer->debug_id = t->debug_id;
    t->buffer->transaction = t;
    t->buffer->target_node = target_node;
    t->buffer->clear_on_free = !!(t->flags & TF_CLEAR_BUF);
    trace_binder_transaction_alloc_buf(t->buffer);

    // copy the client's offsets data into the memory mmap'd by the target
    // process service_manager, i.e. the buffer t->buffer points at
    if (binder_alloc_copy_user_to_buffer(
                &target_proc->alloc,
                t->buffer,
                ALIGN(tr->data_size, sizeof(void *)),
                (const void __user *)(uintptr_t)tr->data.ptr.offsets,
                tr->offsets_size)) {
        binder_user_error("%d:%d got transaction with invalid offsets ptr\n",
                  proc->pid, thread->pid);
        return_error = BR_FAILED_REPLY;
        return_error_param = -EFAULT;
        return_error_line = __LINE__;
        goto err_copy_data_failed;
    }
    ......
}

/**
 * binder_alloc_copy_user_to_buffer() - copy src user to tgt user
 * @alloc: binder_alloc for this proc
 * @buffer: binder buffer to be accessed
 * @buffer_offset: offset into @buffer data
 * @from: userspace pointer to source buffer
 * @bytes: bytes to copy
 *
 * Copy bytes from source userspace to target buffer.
 *
 * Return: bytes remaining to be copied
 */
unsigned long
binder_alloc_copy_user_to_buffer(struct binder_alloc *alloc,
                 struct binder_buffer *buffer,
                 binder_size_t buffer_offset,
                 const void __user *from,
                 size_t bytes)
{
    if (!check_buffer(alloc, buffer, buffer_offset, bytes))
        return bytes;

    while (bytes) {
        unsigned long size;
        unsigned long ret;
        struct page *page;
        pgoff_t pgoff;
        void *kptr;

        page = binder_alloc_get_page(alloc, buffer, buffer_offset, &pgoff);
        size = min_t(size_t, bytes, PAGE_SIZE - pgoff);
        kptr = kmap_local_page(page) + pgoff;
        // copy the sender's data into the kernel pages mmap'd by service_manager
        ret = copy_from_user(kptr, from, size);
        kunmap_local(kptr);
        if (ret)
            return bytes - size + ret;
        bytes -= size;
        from += size;
        buffer_offset += size;
    }
    return 0;
}
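Note what this buys us: binder_alloc_copy_user_to_buffer() copies the sender's bytes directly into pages the receiver has already mmap'd, so a single copy_from_user() is the only copy on the data path. The receiver-side mapping is set up once, in binder_open(); below is a minimal sketch modeled on the demo's user-space binder.c from Part 1 (error handling trimmed, so treat the details as illustrative):

struct binder_state *binder_open(size_t mapsize)
{
    struct binder_state *bs = malloc(sizeof(*bs));

    bs->fd = open("/dev/binder", O_RDWR);
    bs->mapsize = mapsize;
    /* Map the driver's buffer once. Kernel-side copies into this
     * allocation become visible here without a second copy. */
    bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0);
    return bs;
}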
4.3 Copy data.ptr.buffer of the client's binder_transaction_data into service_manager's mmap'd kernel space
static void binder_transaction(struct binder_proc *proc,
                   struct binder_thread *thread,
                   struct binder_transaction_data *tr, int reply,
                   binder_size_t extra_buffers_size)
{
    ......
    // (identical to the listing in 4.2 up to and including the copy of data.ptr.offsets)
    ......
    /* Done processing objects, copy the rest of the buffer */
    // copy the remaining payload (data.ptr.buffer) into the target's mmap'd space
    if (binder_alloc_copy_user_to_buffer(
                &target_proc->alloc,
                t->buffer, user_offset,
                user_buffer + user_offset,
                tr->data_size - user_offset)) {
        binder_user_error("%d:%d got transaction with invalid data ptr\n",
                  proc->pid, thread->pid);
        return_error = BR_FAILED_REPLY;
        return_error_param = -EFAULT;
        return_error_line = __LINE__;
        goto err_copy_data_failed;
    }
    ......
}
4.4 Put the pending data on the todo list of service_manager's binder_proc or binder_thread
static void binder_transaction(struct binder_proc *proc,
                   struct binder_thread *thread,
                   struct binder_transaction_data *tr, int reply,
                   binder_size_t extra_buffers_size)
{
    ......
    // (identical to the listings in 4.2 and 4.3: find the target, allocate t,
    //  copy data.ptr.offsets and data.ptr.buffer into service_manager's mmap'd space)
    ......
    t->work.type = BINDER_WORK_TRANSACTION;

    if (reply) {
        ......
    } else if (!(t->flags & TF_ONE_WAY)) {
        BUG_ON(t->buffer->async_transaction != 0);
        binder_inner_proc_lock(proc);
        /*
         * Defer the TRANSACTION_COMPLETE, so we don't return to
         * userspace immediately; this allows the target process to
         * immediately start processing this transaction, reducing
         * latency. We will then return the TRANSACTION_COMPLETE when
         * the target replies (or there is an error).
         */
        binder_enqueue_deferred_thread_work_ilocked(thread, tcomplete);
        t->need_reply = 1;
        t->from_parent = thread->transaction_stack; // push
        thread->transaction_stack = t;
        binder_inner_proc_unlock(proc);
        // put the data on the todo list of the target's binder_proc or binder_thread
        return_error = binder_proc_transaction(t, target_proc, target_thread);
        if (return_error) {
            binder_inner_proc_lock(proc);
            binder_pop_transaction_ilocked(thread, t);
            binder_inner_proc_unlock(proc);
            goto err_dead_proc_or_thread;
        }
    } else {
        ......
    }
}
4.5 binder_proc_transaction puts the pending data on service_manager's todo list and wakes service_manager up
static int binder_proc_transaction(struct binder_transaction *t,
                   struct binder_proc *proc,
                   struct binder_thread *thread)
{
    struct binder_node *node = t->buffer->target_node;
    bool oneway = !!(t->flags & TF_ONE_WAY);
    bool pending_async = false;
    struct binder_transaction *t_outdated = NULL;
    bool frozen = false;

    BUG_ON(!node);
    binder_node_lock(node);

    if (oneway) {
        BUG_ON(thread);
        if (node->has_async_transaction)
            pending_async = true;
        else
            node->has_async_transaction = true;
    }

    binder_inner_proc_lock(proc);
    if (proc->is_frozen) {
        frozen = true;
        proc->sync_recv |= !oneway;
        proc->async_recv |= oneway;
    }

    if ((frozen && !oneway) || proc->is_dead ||
            (thread && thread->is_dead)) {
        binder_inner_proc_unlock(proc);
        binder_node_unlock(node);
        return frozen ? BR_FROZEN_REPLY : BR_DEAD_REPLY;
    }

    if (!thread && !pending_async)
        thread = binder_select_thread_ilocked(proc);

    if (thread) {
        binder_enqueue_thread_work_ilocked(thread, &t->work); // queue onto the target's binder_thread
    } else if (!pending_async) {
        binder_enqueue_work_ilocked(&t->work, &proc->todo);   // queue onto the target's binder_proc
    } else {
        if ((t->flags & TF_UPDATE_TXN) && frozen) {
            t_outdated = binder_find_outdated_transaction_ilocked(t, &node->async_todo);
            if (t_outdated) {
                binder_debug(BINDER_DEBUG_TRANSACTION,
                         "txn %d supersedes %d\n",
                         t->debug_id, t_outdated->debug_id);
                list_del_init(&t_outdated->work.entry);
                proc->outstanding_txns--;
            }
        }
        binder_enqueue_work_ilocked(&t->work, &node->async_todo);
    }

    if (!pending_async)
        binder_wakeup_thread_ilocked(proc, thread, !oneway /* sync */);

    proc->outstanding_txns++;
    binder_inner_proc_unlock(proc);
    binder_node_unlock(node);

    /*
     * To reduce potential contention, free the outdated transaction and
     * buffer after releasing the locks.
     */
    if (t_outdated) {
        struct binder_buffer *buffer = t_outdated->buffer;

        t_outdated->buffer = NULL;
        buffer->transaction = NULL;
        trace_binder_transaction_update_buffer_release(buffer);
        binder_release_entire_buffer(proc, NULL, buffer, false);
        binder_alloc_free_buf(&proc->alloc, buffer);
        kfree(t_outdated);
        binder_stats_deleted(BINDER_STAT_TRANSACTION);
    }

    if (oneway && frozen)
        return BR_TRANSACTION_PENDING_FROZEN;

    return 0;
}

static void
binder_enqueue_thread_work_ilocked(struct binder_thread *thread,
                   struct binder_work *work)
{
    WARN_ON(!list_empty(&thread->waiting_thread_node));
    binder_enqueue_work_ilocked(work, &thread->todo); // put the pending work on the thread's todo list
    /* (e)poll-based threads require an explicit wakeup signal when
     * queuing their own work; they rely on these events to consume
     * messages without I/O block. Without it, threads risk waiting
     * indefinitely without handling the work.
     */
    if (thread->looper & BINDER_LOOPER_STATE_POLL &&
            thread->pid == current->pid && !thread->process_todo)
        wake_up_interruptible_sync(&thread->wait); // wake service_manager up
    thread->process_todo = true;
}

static void
binder_enqueue_work_ilocked(struct binder_work *work,
               struct list_head *target_list)
{
    BUG_ON(target_list == NULL);
    BUG_ON(work->entry.next && !list_empty(&work->entry));
    list_add_tail(&work->entry, target_list);
}
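For reference, binder_select_thread_ilocked() simply takes the first thread parked on proc->waiting_threads. The body below is paraphrased from the mainline driver, so treat it as indicative rather than version-exact:

static struct binder_thread *
binder_select_thread_ilocked(struct binder_proc *proc)
{
    struct binder_thread *thread;

    /* pick the first idle looper thread, if any, and remove it
     * from the waiting list so no one else picks it too */
    thread = list_first_entry_or_null(&proc->waiting_threads,
                      struct binder_thread,
                      waiting_thread_node);
    if (thread)
        list_del_init(&thread->waiting_thread_node);
    return thread;
}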
2.2 service_manager is woken up
1. service_manager issues an ioctl to read the data from the kernel
int main(int argc, char **argv)
{
    struct binder_state *bs;

    bs = binder_open(128*1024);
    if (!bs) {
        ALOGE("failed to open binder driver\n");
        return -1;
    }

    if (binder_become_context_manager(bs)) {
        ALOGE("cannot become context manager (%s)\n", strerror(errno));
        return -1;
    }

    svcmgr_handle = BINDER_SERVICE_MANAGER;
    binder_loop(bs, svcmgr_handler);

    return 0;
}

void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;
    uint32_t readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    readbuf[0] = BC_ENTER_LOOPER;
    binder_write(bs, readbuf, sizeof(uint32_t));

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (uintptr_t) readbuf;

        // issue the read
        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
        if (res < 0) {
            ALOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
            break;
        }

        res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);
        if (res == 0) {
            ALOGE("binder_loop: unexpected reply?!\n");
            break;
        }
        if (res < 0) {
            ALOGE("binder_loop: io error %d %s\n", res, strerror(errno));
            break;
        }
    }
}
2. binder_ioctl
// res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr); enters the binder driver
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    int ret;
    struct binder_proc *proc = filp->private_data;
    struct binder_thread *thread;
    void __user *ubuf = (void __user *)arg;

    /*pr_info("binder_ioctl: %d:%d %x %lx\n",
            proc->pid, current->pid, cmd, arg);*/

    binder_selftest_alloc(&proc->alloc);

    trace_binder_ioctl(cmd, arg);

    ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
    if (ret)
        goto err_unlocked;

    // create (or look up) the binder_thread for this process
    thread = binder_get_thread(proc);
    if (thread == NULL) {
        ret = -ENOMEM;
        goto err;
    }

    switch (cmd) {
    case BINDER_WRITE_READ:
        ret = binder_ioctl_write_read(filp, arg, thread);
        if (ret)
            goto err;
        break;
    ......
    }
    ......
}
3. binder_ioctl_write_read
static int binder_ioctl_write_read(struct file *filp, unsigned long arg,
                struct binder_thread *thread)
{
    int ret = 0;
    struct binder_proc *proc = filp->private_data;
    void __user *ubuf = (void __user *)arg;
    struct binder_write_read bwr;

    // copy the binder_write_read header from user space into kernel space
    if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
        ret = -EFAULT;
        goto out;
    }
    binder_debug(BINDER_DEBUG_READ_WRITE,
             "%d:%d write %lld at %016llx, read %lld at %016llx\n",
             proc->pid, thread->pid,
             (u64)bwr.write_size, (u64)bwr.write_buffer,
             (u64)bwr.read_size, (u64)bwr.read_buffer);

    if (bwr.write_size > 0) {
        ......
    }
    if (bwr.read_size > 0) { // service_manager is reading, so read_size > 0 this time
        ret = binder_thread_read(proc, thread, bwr.read_buffer,
                     bwr.read_size, &bwr.read_consumed,
                     filp->f_flags & O_NONBLOCK);
        trace_binder_read_done(ret);
        binder_inner_proc_lock(proc);
        if (!binder_worklist_empty_ilocked(&proc->todo))
            binder_wakeup_proc_ilocked(proc);
        binder_inner_proc_unlock(proc);
        if (ret < 0) {
            if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                ret = -EFAULT;
            goto out;
        }
    }
    binder_debug(BINDER_DEBUG_READ_WRITE,
             "%d:%d wrote %lld of %lld, read return %lld of %lld\n",
             proc->pid, thread->pid,
             (u64)bwr.write_consumed, (u64)bwr.write_size,
             (u64)bwr.read_consumed, (u64)bwr.read_size);

    if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
        ret = -EFAULT;
        goto out;
    }
out:
    return ret;
}
4. binder_thread_read
binder_thread_read reads the data queued in service_manager's kernel space and writes it into service_manager's user space.
static int binder_thread_read(struct binder_proc *proc,
                  struct binder_thread *thread,
                  binder_uintptr_t binder_buffer, size_t size,
                  binder_size_t *consumed, int non_block)
{
    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;

    int ret = 0;
    int wait_for_proc_work;

    if (*consumed == 0) {
        // every read starts with a BR_NOOP header
        if (put_user(BR_NOOP, (uint32_t __user *)ptr))
            return -EFAULT;
        ptr += sizeof(uint32_t);
    }

retry:
    binder_inner_proc_lock(proc);
    wait_for_proc_work = binder_available_for_proc_work_ilocked(thread);
    binder_inner_proc_unlock(proc);

    thread->looper |= BINDER_LOOPER_STATE_WAITING;

    trace_binder_wait_for_work(wait_for_proc_work,
                   !!thread->transaction_stack,
                   !binder_worklist_empty(proc, &thread->todo));
    if (wait_for_proc_work) {
        if (!(thread->looper & (BINDER_LOOPER_STATE_REGISTERED |
                    BINDER_LOOPER_STATE_ENTERED))) {
            binder_user_error("%d:%d ERROR: Thread waiting for process work before calling BC_REGISTER_LOOPER or BC_ENTER_LOOPER (state %x)\n",
                proc->pid, thread->pid, thread->looper);
            wait_event_interruptible(binder_user_error_wait,
                         binder_stop_on_user_error < 2);
        }
        binder_set_nice(proc->default_priority);
    }

    // sleep if there is no data
    if (non_block) {
        if (!binder_has_work(thread, wait_for_proc_work))
            ret = -EAGAIN;
    } else {
        ret = binder_wait_for_work(thread, wait_for_proc_work);
    }

    thread->looper &= ~BINDER_LOOPER_STATE_WAITING;

    if (ret)
        return ret;

    while (1) {
        uint32_t cmd;
        struct binder_transaction_data_secctx tr;
        struct binder_transaction_data *trd = &tr.transaction_data;
        struct binder_work *w = NULL;
        struct list_head *list = NULL;
        struct binder_transaction *t = NULL;
        struct binder_thread *t_from;
        size_t trsize = sizeof(*trd);

        binder_inner_proc_lock(proc);
        // if the thread's todo list has work, take it from there
        if (!binder_worklist_empty_ilocked(&thread->todo))
            list = &thread->todo;
        // otherwise, if proc->todo has work, take it from there
        else if (!binder_worklist_empty_ilocked(&proc->todo) && wait_for_proc_work)
            list = &proc->todo;
        else {
            binder_inner_proc_unlock(proc);

            /* no data added */
            if (ptr - buffer == 4 && !thread->looper_need_return)
                goto retry;
            break;
        }

        if (end - ptr < sizeof(tr) + 4) {
            binder_inner_proc_unlock(proc);
            break;
        }
        w = binder_dequeue_work_head_ilocked(list);
        if (binder_worklist_empty_ilocked(&thread->todo))
            thread->process_todo = false;

        // dispatch on the work type; when the client woke service_manager up,
        // the queued binder_work.type was BINDER_WORK_TRANSACTION
        switch (w->type) {
        case BINDER_WORK_TRANSACTION: {
            binder_inner_proc_unlock(proc);
            t = container_of(w, struct binder_transaction, work); // recover the sender's binder_transaction
        } break;
        ......
        }

        if (!t)
            continue;

        BUG_ON(t->buffer == NULL);
        if (t->buffer->target_node) {
            struct binder_node *target_node = t->buffer->target_node;

            trd->target.ptr = target_node->ptr;
            trd->cookie = target_node->cookie;
            t->saved_priority = task_nice(current);
            if (t->priority < target_node->min_priority &&
                !(t->flags & TF_ONE_WAY))
                binder_set_nice(t->priority);
            else if (!(t->flags & TF_ONE_WAY) ||
                 t->saved_priority > target_node->min_priority)
                binder_set_nice(target_node->min_priority);
            // the sender wrote with BC_TRANSACTION; when handing the data
            // to service_manager, the cmd becomes BR_TRANSACTION
            cmd = BR_TRANSACTION;
        } else {
            trd->target.ptr = 0;
            trd->cookie = 0;
            cmd = BR_REPLY;
        }
        trd->code = t->code;
        trd->flags = t->flags;
        trd->sender_euid = from_kuid(current_user_ns(), t->sender_euid);
        ......
        trd->data_size = t->buffer->data_size;
        trd->offsets_size = t->buffer->offsets_size;
        trd->data.ptr.buffer = t->buffer->user_data;
        trd->data.ptr.offsets = trd->data.ptr.buffer +
                    ALIGN(t->buffer->data_size, sizeof(void *));

        tr.secctx = t->security_ctx;
        if (t->security_ctx) {
            cmd = BR_TRANSACTION_SEC_CTX;
            trsize = sizeof(tr);
        }
        // write cmd into service_manager's user space
        if (put_user(cmd, (uint32_t __user *)ptr)) {
            if (t_from)
                binder_thread_dec_tmpref(t_from);

            binder_cleanup_transaction(t, "put_user failed", BR_FAILED_REPLY);

            return -EFAULT;
        }
        ptr += sizeof(uint32_t);
        // write tr into service_manager's user space; tr.transaction_data
        // carries the data the client sent
        if (copy_to_user(ptr, &tr, trsize)) {
            if (t_from)
                binder_thread_dec_tmpref(t_from);

            binder_cleanup_transaction(t, "copy_to_user failed", BR_FAILED_REPLY);

            return -EFAULT;
        }
        ptr += trsize;
        ......
done:
    ......
    return 0;
}
5. Layout of the data read from service_manager's kernel space
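The original post shows a diagram at this point; its essence, reconstructed from the binder_thread_read() listing above, is:

/*
 * readbuf after binder_thread_read() returns (sketch):
 *
 *   +---------+--------------------+----------------------------------+
 *   | BR_NOOP | BR_TRANSACTION (*) | struct binder_transaction_data   |
 *   +---------+--------------------+----------------------------------+
 *
 * (*) or BR_TRANSACTION_SEC_CTX when a security context is attached.
 * data.ptr.buffer / data.ptr.offsets in the trailing structure point into
 * service_manager's mmap'd buffer, where the client's payload now lives.
 */
6. binder_parse parses the data the client sent to service_manager
At this point cmd is BR_TRANSACTION.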
void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;
    uint32_t readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    readbuf[0] = BC_ENTER_LOOPER;
    binder_write(bs, readbuf, sizeof(uint32_t));

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (uintptr_t) readbuf;

        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr); // the data arrives here
        if (res < 0) {
            ALOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
            break;
        }

        // parse what was read
        res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);
        if (res == 0) {
            ALOGE("binder_loop: unexpected reply?!\n");
            break;
        }
        if (res < 0) {
            ALOGE("binder_loop: io error %d %s\n", res, strerror(errno));
            break;
        }
    }
}

int binder_parse(struct binder_state *bs, struct binder_io *bio,
         uintptr_t ptr, size_t size, binder_handler func)
{
    int r = 1;
    uintptr_t end = ptr + (uintptr_t) size;

    while (ptr < end) {
        uint32_t cmd = *(uint32_t *) ptr;
        ptr += sizeof(uint32_t);
#if TRACE
        fprintf(stderr,"%s:\n", cmd_name(cmd));
#endif
        switch(cmd) {
        case BR_NOOP:
            break;
        ......
        // incoming data (it carries the service name and, in the reply, the service handle)
        case BR_TRANSACTION: {
            struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;
            if ((end - ptr) < sizeof(*txn)) {
                ALOGE("parse: txn too small!\n");
                return -1;
            }
            binder_dump_txn(txn);
            if (func) {
                unsigned rdata[256/4];
                struct binder_io msg;
                struct binder_io reply;
                int res;

                // build the binder_io structures
                bio_init(&reply, rdata, sizeof(rdata), 4);
                bio_init_from_txn(&msg, txn);
                // process the binder_io; func = svcmgr_handler, which adds/looks up services
                res = func(bs, txn, &msg, &reply);
                // send the processed result back to the requester
                binder_send_reply(bs, &reply, txn->data.ptr.buffer, res);
            }
            ptr += sizeof(*txn);
            break;
        }
        ......
        default:
            ALOGE("parse: OOPS %d\n", cmd);
            return -1;
        }
    }
    return r;
}
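bio_init_from_txn() is what lets svcmgr_handler read the payload in place. For reference, in the demo's binder.c it looks roughly like this; BIO_F_SHARED marks the buffer as driver-owned so it is released later with BC_FREE_BUFFER:

void bio_init_from_txn(struct binder_io *bio, struct binder_transaction_data *txn)
{
    // point the binder_io directly at the driver-provided buffer; no copy
    bio->data = bio->data0 = (char *)(intptr_t) txn->data.ptr.buffer;
    bio->offs = bio->offs0 = (binder_size_t *)(intptr_t) txn->data.ptr.offsets;
    bio->data_avail = txn->data_size;
    bio->offs_avail = txn->offsets_size / sizeof(size_t);
    bio->flags = BIO_F_SHARED; // data lives in the driver's mmap'd buffer
}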
7. svcmgr_handler processes the client's request and looks up the handle of the requested service
int svcmgr_handler(struct binder_state *bs,
           struct binder_transaction_data *txn,
           struct binder_io *msg,
           struct binder_io *reply)
{
    struct svcinfo *si;
    uint16_t *s;
    size_t len;
    uint32_t handle;
    uint32_t strict_policy;
    int allow_isolated;

    //ALOGI("target=%x code=%d pid=%d uid=%d\n",
    //  txn->target.handle, txn->code, txn->sender_pid, txn->sender_euid);

    if (txn->target.handle != svcmgr_handle)
        return -1;

    if (txn->code == PING_TRANSACTION)
        return 0;

    // Equivalent to Parcel::enforceInterface(), reading the RPC
    // header with the strict mode policy mask and the interface name.
    // Note that we ignore the strict_policy and don't propagate it
    // further (since we do no outbound RPCs anyway).
    strict_policy = bio_get_uint32(msg);
    s = bio_get_string16(msg, &len); // this is "android.os.IServiceManager"
    if (s == NULL) {
        return -1;
    }

    if ((len != (sizeof(svcmgr_id) / 2)) ||
        memcmp(svcmgr_id, s, sizeof(svcmgr_id))) {
        // the caller must pass "android.os.IServiceManager"
        fprintf(stderr,"invalid id %s\n", str8(s, len));
        return -1;
    }

    switch(txn->code) {
    case SVC_MGR_GET_SERVICE:
    case SVC_MGR_CHECK_SERVICE:
        s = bio_get_string16(msg, &len); // name of the requested service: "hello"
        if (s == NULL) {
            return -1;
        }
        // search service_manager's service list for the handle of the "hello" service
        handle = do_find_service(bs, s, len, txn->sender_euid, txn->sender_pid);
        if (!handle)
            break;
        // write the service handle into reply
        bio_put_ref(reply, handle);
        return 0;
    ......

    bio_put_uint32(reply, 0); // finally, every reply is finished off with a 0 status
    return 0;
}

uint32_t do_find_service(struct binder_state *bs, const uint16_t *s, size_t len, uid_t uid, pid_t spid)
{
    struct svcinfo *si;

    if (!svc_can_find(s, len, spid)) {
        ALOGE("find_service('%s') uid=%d - PERMISSION DENIED\n",
             str8(s, len), uid);
        return 0;
    }
    si = find_svc(s, len);
    //ALOGI("check_service('%s') handle = %x\n", str8(s, len), si ? si->handle : 0);
    if (si && si->handle) {
        if (!si->allow_isolated) {
            // If this service doesn't allow access from isolated processes,
            // then check the uid to see if it is isolated.
            uid_t appid = uid % AID_USER;
            if (appid >= AID_ISOLATED_START && appid <= AID_ISOLATED_END) {
                return 0;
            }
        }
        return si->handle;
    } else {
        return 0;
    }
}

struct svcinfo *find_svc(const uint16_t *s16, size_t len)
{
    struct svcinfo *si;

    for (si = svclist; si; si = si->next) {
        if ((len == si->len) &&
            !memcmp(s16, si->name, len * sizeof(uint16_t))) {
            return si;
        }
    }
    return NULL;
}

void bio_put_ref(struct binder_io *bio, uint32_t handle)
{
    struct flat_binder_object *obj;

    if (handle)
        obj = bio_alloc_obj(bio);
    else
        obj = bio_alloc(bio, sizeof(*obj));

    if (!obj)
        return;

    obj->flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
    obj->type = BINDER_TYPE_HANDLE;
    obj->handle = handle;
    obj->cookie = 0;
}
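The flat_binder_object written here is exactly what the client later unpacks with bio_get_ref() in svcmgr_lookup(). For reference, the demo's implementation is essentially the following (_bio_get_obj is an internal helper of the demo's binder.c):

uint32_t bio_get_ref(struct binder_io *bio)
{
    struct flat_binder_object *obj;

    obj = _bio_get_obj(bio);
    if (!obj)
        return 0;

    // by the time the reply reaches the client, the driver has rewritten
    // obj->handle into the client's own binder_ref desc (see 4.3/4.4 below)
    if (obj->type == BINDER_TYPE_HANDLE)
        return obj->handle;

    return 0;
}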
8. binder_send_reply sends the looked-up service handle back to the driver
int binder_parse(struct binder_state *bs, struct binder_io *bio,
         uintptr_t ptr, size_t size, binder_handler func)
{
    ......
        case BR_TRANSACTION: {
            ......
            if (func) {
                ......
                res = func(bs, txn, &msg, &reply); // func = svcmgr_handler
                // send the processed result back to the requester
                binder_send_reply(bs, &reply, txn->data.ptr.buffer, res);
            }
            ptr += sizeof(*txn);
            break;
        }
    ......
}

void binder_send_reply(struct binder_state *bs,
               struct binder_io *reply,
               binder_uintptr_t buffer_to_free,
               int status)
{
    struct {
        uint32_t cmd_free;
        binder_uintptr_t buffer;
        uint32_t cmd_reply;
        struct binder_transaction_data txn;
    } __attribute__((packed)) data;

    // the data copied into service_manager's mmap'd kernel buffer has been
    // consumed, so it can be freed now
    data.cmd_free = BC_FREE_BUFFER;
    data.buffer = buffer_to_free;
    // service_manager replies with its result, so cmd = BC_REPLY
    data.cmd_reply = BC_REPLY;
    data.txn.target.ptr = 0;
    data.txn.cookie = 0;
    data.txn.code = 0;
    if (status) {
        data.txn.flags = TF_STATUS_CODE;
        data.txn.data_size = sizeof(int);
        data.txn.offsets_size = 0;
        data.txn.data.ptr.buffer = (uintptr_t)&status;
        data.txn.data.ptr.offsets = 0;
    } else {
        data.txn.flags = 0;
        data.txn.data_size = reply->data - reply->data0;
        data.txn.offsets_size = ((char*) reply->offs) - ((char*) reply->offs0);
        data.txn.data.ptr.buffer = (uintptr_t)reply->data0;
        data.txn.data.ptr.offsets = (uintptr_t)reply->offs0;
    }
    binder_write(bs, &data, sizeof(data));
}

int binder_write(struct binder_state *bs, void *data, size_t len)
{
    struct binder_write_read bwr;
    int res;

    bwr.write_size = len;
    bwr.write_consumed = 0;
    bwr.write_buffer = (uintptr_t) data;
    bwr.read_size = 0;
    bwr.read_consumed = 0;
    bwr.read_buffer = 0;
    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
    if (res < 0) {
        fprintf(stderr,"binder_write: ioctl failed (%s)\n", strerror(errno));
    }
    return res;
}
2.3 The Binder driver receives service_manager's reply to the client's request
1. binder_ioctl
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    ......
    // same as before: look up/create the binder_thread for this process,
    // then dispatch on cmd
    thread = binder_get_thread(proc);
    ......
    switch (cmd) {
    case BINDER_WRITE_READ:
        ret = binder_ioctl_write_read(filp, arg, thread);
        if (ret)
            goto err;
        break;
    ......
    }
    ......
}
2. binder_ioctl_write_read
static int binder_ioctl_write_read(struct file *filp, unsigned long arg,
                struct binder_thread *thread)
{
    int ret = 0;
    struct binder_proc *proc = filp->private_data;
    void __user *ubuf = (void __user *)arg;
    struct binder_write_read bwr;

    // copy the binder_write_read header from user space into kernel space
    if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
        ret = -EFAULT;
        goto out;
    }
    binder_debug(BINDER_DEBUG_READ_WRITE,
             "%d:%d write %lld at %016llx, read %lld at %016llx\n",
             proc->pid, thread->pid,
             (u64)bwr.write_size, (u64)bwr.write_buffer,
             (u64)bwr.read_size, (u64)bwr.read_buffer);

    if (bwr.write_size > 0) { // service_manager is writing its reply, so write_size > 0
        ret = binder_thread_write(proc, thread,
                      bwr.write_buffer, bwr.write_size,
                      &bwr.write_consumed);
        trace_binder_write_done(ret);
        if (ret < 0) {
            bwr.read_consumed = 0;
            if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                ret = -EFAULT;
            goto out;
        }
    }
    if (bwr.read_size > 0) {
        ......
    }
    ......
out:
    return ret;
}
3. binder_thread_write
At this point cmd is BC_REPLY.
static int binder_thread_write(struct binder_proc *proc,
            struct binder_thread *thread,
            binder_uintptr_t binder_buffer, size_t size,
            binder_size_t *consumed)
{
    ......
    // walk the entries (cmd + binder_transaction_data) exactly as before;
    // this time the cmd read with get_user() is BC_REPLY
    switch (cmd) {
    ......
    case BC_TRANSACTION:
    case BC_REPLY: {
        struct binder_transaction_data tr;

        // copy the binder_transaction_data from user space into kernel space
        if (copy_from_user(&tr, ptr, sizeof(tr)))
            return -EFAULT;
        // advance past it to the start of the next cmd
        ptr += sizeof(tr);
        // process it; note the fourth argument: cmd == BC_REPLY is now true
        binder_transaction(proc, thread, &tr, cmd == BC_REPLY, 0);
        break;
    }
    }
    ......
}
4. binder_transaction
4.1 Find the process to reply to
static void binder_transaction(struct binder_proc *proc,
                   struct binder_thread *thread,
                   struct binder_transaction_data *tr, int reply,
                   binder_size_t extra_buffers_size)
{
    ......
    if (reply) {
        // find the process to reply to
        binder_inner_proc_lock(proc);
        // take the binder_transaction off the stack to learn who we are answering
        in_reply_to = thread->transaction_stack;
        if (in_reply_to == NULL) {
            binder_inner_proc_unlock(proc);
            binder_user_error("%d:%d got reply transaction with no transaction stack\n",
                      proc->pid, thread->pid);
            return_error = BR_FAILED_REPLY;
            return_error_param = -EPROTO;
            return_error_line = __LINE__;
            goto err_empty_call_stack;
        }
        if (in_reply_to->to_thread != thread) {
            spin_lock(&in_reply_to->lock);
            binder_user_error("%d:%d got reply transaction with bad transaction stack, transaction %d has target %d:%d\n",
                proc->pid, thread->pid, in_reply_to->debug_id,
                in_reply_to->to_proc ? in_reply_to->to_proc->pid : 0,
                in_reply_to->to_thread ? in_reply_to->to_thread->pid : 0);
            spin_unlock(&in_reply_to->lock);
            binder_inner_proc_unlock(proc);
            return_error = BR_FAILED_REPLY;
            return_error_param = -EPROTO;
            return_error_line = __LINE__;
            in_reply_to = NULL;
            goto err_bad_call_stack;
        }
        thread->transaction_stack = in_reply_to->to_parent; // pop
        binder_inner_proc_unlock(proc);
        binder_set_nice(in_reply_to->saved_priority);
        target_thread = binder_get_txn_from_and_acq_inner(in_reply_to);
        if (target_thread == NULL) {
            /* annotation for sparse */
            __release(&target_thread->proc->inner_lock);
            binder_txn_error("%d:%d reply target not found\n",
                thread->pid, proc->pid);
            return_error = BR_DEAD_REPLY;
            return_error_line = __LINE__;
            goto err_dead_binder;
        }
        if (target_thread->transaction_stack != in_reply_to) {
            binder_user_error("%d:%d got reply transaction with bad target transaction stack %d, expected %d\n",
                proc->pid, thread->pid,
                target_thread->transaction_stack ?
                target_thread->transaction_stack->debug_id : 0,
                in_reply_to->debug_id);
            binder_inner_proc_unlock(target_thread->proc);
            return_error = BR_FAILED_REPLY;
            return_error_param = -EPROTO;
            return_error_line = __LINE__;
            in_reply_to = NULL;
            target_thread = NULL;
            goto err_dead_binder;
        }
        // the process to reply to is the one owning the waiting thread
        target_proc = target_thread->proc;
        target_proc->tmp_ref++;
        binder_inner_proc_unlock(target_thread->proc);
    } else {
        // 1. find the target process to send to
        ......
    }
    ......
}
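The push in 2.1 (step 4.4) and the pop here are two halves of one mechanism. A compact summary, reconstructed from the two listings rather than additional source:

/*
 * Pairing of the transaction stack (sketch):
 *
 * client, BC_TRANSACTION:
 *     t->from_parent = client_thread->transaction_stack;    // push on the client
 *     client_thread->transaction_stack = t;
 *     // t is then queued on service_manager's todo list
 *
 * service_manager, BC_REPLY:
 *     in_reply_to = sm_thread->transaction_stack;            // who am I answering?
 *     sm_thread->transaction_stack = in_reply_to->to_parent; // pop
 *     // target_thread is recovered from in_reply_to (the client's thread,
 *     // still blocked waiting for the reply)
 */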
4.2 Handle the flat_binder_object
static void binder_transaction(struct binder_proc *proc,
                   struct binder_thread *thread,
                   struct binder_transaction_data *tr, int reply,
                   binder_size_t extra_buffers_size)
{
    ......
    // (the reply branch above found target_thread/target_proc, i.e. the client;
    //  t was allocated and the reply data was copied into the client's
    //  (test_client's) mmap'd space, just as in the send direction)
    ......
    // walk the offsets service_manager passed via binder_io.offs; each entry
    // points at a flat_binder_object
    for (buffer_offset = off_start_offset; buffer_offset < off_end_offset;
         buffer_offset += sizeof(binder_size_t)) {
        struct binder_object_header *hdr;
        size_t object_size;
        struct binder_object object;
        binder_size_t object_offset;
        binder_size_t copy_size;

        if (binder_alloc_copy_from_buffer(&target_proc->alloc,
                          &object_offset,
                          t->buffer,
                          buffer_offset,
                          sizeof(object_offset))) {
            binder_txn_error("%d:%d copy offset from buffer failed\n",
                thread->pid, proc->pid);
            return_error = BR_FAILED_REPLY;
            return_error_param = -EINVAL;
            return_error_line = __LINE__;
            goto err_bad_offset;
        }

        /*
         * Copy the source user buffer up to the next object
         * that will be processed.
         */
        copy_size = object_offset - user_offset;
        if (copy_size && (user_offset > object_offset ||
                binder_alloc_copy_user_to_buffer(
                    &target_proc->alloc,
                    t->buffer, user_offset,
                    user_buffer + user_offset,
                    copy_size))) {
            binder_user_error("%d:%d got transaction with invalid data ptr\n",
                      proc->pid, thread->pid);
            return_error = BR_FAILED_REPLY;
            return_error_param = -EFAULT;
            return_error_line = __LINE__;
            goto err_copy_data_failed;
        }
        // read the flat_binder_object into object
        object_size = binder_get_object(target_proc, user_buffer,
                        t->buffer, object_offset, &object);
        if (object_size == 0 || object_offset < off_min) {
            binder_user_error("%d:%d got transaction with invalid offset (%lld, min %lld max %lld) or object.\n",
                      proc->pid, thread->pid,
                      (u64)object_offset,
                      (u64)off_min,
                      (u64)t->buffer->data_size);
            return_error = BR_FAILED_REPLY;
            return_error_param = -EINVAL;
            return_error_line = __LINE__;
            goto err_bad_offset;
        }
        /*
         * Set offset to the next buffer fragment to be copied
         */
        user_offset = object_offset + object_size;

        hdr = &object.hdr;
        off_min = object_offset + object_size;
        // here the object type is BINDER_TYPE_HANDLE: via the handle, the hello
        // service's binder_node is found through service_manager's binder_ref
        switch (hdr->type) {
        // binder entity
        case BINDER_TYPE_BINDER:
        case BINDER_TYPE_WEAK_BINDER: {
            ......
        } break;
        // binder reference
        case BINDER_TYPE_HANDLE:
        case BINDER_TYPE_WEAK_HANDLE: {
            struct flat_binder_object *fp;

            fp = to_flat_binder_object(hdr);
            ret = binder_translate_handle(fp, t, thread);
            if (ret < 0 ||
                binder_alloc_copy_to_buffer(&target_proc->alloc,
                            t->buffer,
                            object_offset,
                            fp, sizeof(*fp))) {
                binder_txn_error("%d:%d translate handle failed\n",
                    thread->pid, proc->pid);
                return_error = BR_FAILED_REPLY;
                return_error_param = ret;
                return_error_line = __LINE__;
                goto err_translate_failed;
            }
        } break;
        ......
        }
    ......
}
4.3 Find the service's binder_node from the handle
static int binder_translate_handle(struct flat_binder_object *fp,
                   struct binder_transaction *t,
                   struct binder_thread *thread)
{
    struct binder_proc *proc = thread->proc;
    struct binder_proc *target_proc = t->to_proc;
    struct binder_node *node;
    struct binder_ref_data src_rdata;
    int ret = 0;

    // use the handle to find the service's binder_node through service_manager's refs
    node = binder_get_node_from_ref(proc, fp->handle,
            fp->hdr.type == BINDER_TYPE_HANDLE, &src_rdata);
    if (!node) {
        binder_user_error("%d:%d got transaction with invalid handle, %d\n",
                  proc->pid, thread->pid, fp->handle);
        return -EINVAL;
    }
    if (security_binder_transfer_binder(proc->cred, target_proc->cred)) {
        ret = -EPERM;
        goto done;
    }

    binder_node_lock(node);
    if (node->proc == target_proc) {
        if (fp->hdr.type == BINDER_TYPE_HANDLE)
            fp->hdr.type = BINDER_TYPE_BINDER;
        else
            fp->hdr.type = BINDER_TYPE_WEAK_BINDER;
        fp->binder = node->ptr;
        fp->cookie = node->cookie;
        if (node->proc)
            binder_inner_proc_lock(node->proc);
        else
            __acquire(&node->proc->inner_lock);
        binder_inc_node_nilocked(node,
                     fp->hdr.type == BINDER_TYPE_BINDER,
                     0, NULL);
        if (node->proc)
            binder_inner_proc_unlock(node->proc);
        else
            __release(&node->proc->inner_lock);
        trace_binder_transaction_ref_to_node(t, node, &src_rdata);
        binder_debug(BINDER_DEBUG_TRANSACTION,
                 "        ref %d desc %d -> node %d u%016llx\n",
                 src_rdata.debug_id, src_rdata.desc, node->debug_id,
                 (u64)node->ptr);
        binder_node_unlock(node);
    } else {
        struct binder_ref_data dest_rdata;

        binder_node_unlock(node);
        // create a binder_ref for the client that points at the service's binder_node
        ret = binder_inc_ref_for_node(target_proc, node,
                fp->hdr.type == BINDER_TYPE_HANDLE,
                NULL, &dest_rdata);
        if (ret)
            goto done;

        fp->binder = 0;
        fp->handle = dest_rdata.desc;
        fp->cookie = 0;
        trace_binder_transaction_ref_to_ref(t, node, &src_rdata, &dest_rdata);
        binder_debug(BINDER_DEBUG_TRANSACTION,
                 "        ref %d desc %d -> ref %d desc %d (node %d)\n",
                 src_rdata.debug_id, src_rdata.desc,
                 dest_rdata.debug_id, dest_rdata.desc,
                 node->debug_id);
    }
done:
    binder_put_node(node);
    return ret;
}

static struct binder_node *binder_get_node_from_ref(
        struct binder_proc *proc,
        u32 desc, bool need_strong_ref,
        struct binder_ref_data *rdata)
{
    struct binder_node *node;
    struct binder_ref *ref;

    binder_proc_lock(proc);
    // find the binder_ref from the handle (desc)
    ref = binder_get_ref_olocked(proc, desc, need_strong_ref);
    if (!ref)
        goto err_no_ref;
    // the binder_ref leads to the binder_node
    node = ref->node;
    /*
     * Take an implicit reference on the node to ensure
     * it stays alive until the call to binder_put_node()
     */
    binder_inc_node_tmpref(node);
    if (rdata)
        *rdata = ref->data;
    binder_proc_unlock(proc);

    return node;

err_no_ref:
    binder_proc_unlock(proc);
    return NULL;
}

static struct binder_ref *binder_get_ref_olocked(struct binder_proc *proc,
                         u32 desc, bool need_strong_ref)
{
    struct rb_node *n = proc->refs_by_desc.rb_node;
    struct binder_ref *ref;

    while (n) {
        ref = rb_entry(n, struct binder_ref, rb_node_desc);

        if (desc < ref->data.desc) {
            n = n->rb_left;
        } else if (desc > ref->data.desc) {
            n = n->rb_right;
        } else if (need_strong_ref && !ref->data.strong) {
            binder_user_error("tried to use weak ref as strong ref\n");
            return NULL;
        } else {
            return ref;
        }
    }
    return NULL;
}
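Each binder_proc keeps its references in two red-black trees, so both directions of lookup are O(log n). Summarized from the fields used in the listings above:

/*
 * struct binder_proc {
 *     ...
 *     struct rb_root refs_by_desc; // binder_refs keyed by desc (the handle)
 *     struct rb_root refs_by_node; // binder_refs keyed by binder_node address
 *     ...
 * };
 *
 * binder_get_ref_olocked()          walks refs_by_desc: handle -> binder_ref
 * binder_get_ref_for_node_olocked() walks refs_by_node: node   -> binder_ref
 */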
4.4 为客户端创建binder_ref,指向服务的binder_node
static int binder_inc_ref_for_node(struct binder_proc *proc,struct binder_node *node,bool strong,struct list_head *target_list,struct binder_ref_data *rdata)
{struct binder_ref *ref;struct binder_ref *new_ref = NULL;int ret = 0;binder_proc_lock(proc);// 先查找客户端是否已经有对应的binder_ref,若没有则新建binder_refref = binder_get_ref_for_node_olocked(proc, node, NULL);if (!ref) {binder_proc_unlock(proc);new_ref = kzalloc(sizeof(*ref), GFP_KERNEL);if (!new_ref)return -ENOMEM;binder_proc_lock(proc);ref = binder_get_ref_for_node_olocked(proc, node, new_ref);}//增加引用计数ret = binder_inc_ref_olocked(ref, strong, target_list);*rdata = ref->data;if (ret && ref == new_ref) {/** Cleanup the failed reference here as the target* could now be dead and have already released its* references by now. Calling on the new reference* with strong=0 and a tmp_refs will not decrement* the node. The new_ref gets kfree'd below.*/binder_cleanup_ref_olocked(new_ref);ref = NULL;}binder_proc_unlock(proc);if (new_ref && ref != new_ref)/** Another thread created the ref first so* free the one we allocated*/kfree(new_ref);return ret;
}static struct binder_ref *binder_get_ref_for_node_olocked(struct binder_proc *proc,struct binder_node *node,struct binder_ref *new_ref)
{struct binder_context *context = proc->context;struct rb_node **p = &proc->refs_by_node.rb_node;struct rb_node *parent = NULL;struct binder_ref *ref;struct rb_node *n;while (*p) {parent = *p;ref = rb_entry(parent, struct binder_ref, rb_node_node);if (node < ref->node)p = &(*p)->rb_left;else if (node > ref->node)p = &(*p)->rb_right;elsereturn ref;}if (!new_ref)return NULL;binder_stats_created(BINDER_STAT_REF);new_ref->data.debug_id = atomic_inc_return(&binder_last_id);new_ref->proc = proc;new_ref->node = node;rb_link_node(&new_ref->rb_node_node, parent, p);rb_insert_color(&new_ref->rb_node_node, &proc->refs_by_node);// 更新binder_ref的handle值,后续客户端通过handle值找到这个binder_ref,进而找到binder_nodenew_ref->data.desc = (node == context->binder_context_mgr_node) ? 0 : 1;for (n = rb_first(&proc->refs_by_desc); n != NULL; n = rb_next(n)) {ref = rb_entry(n, struct binder_ref, rb_node_desc);if (ref->data.desc > new_ref->data.desc)break;// 客户端引用服务的handle加1new_ref->data.desc = ref->data.desc + 1;}p = &proc->refs_by_desc.rb_node;while (*p) {parent = *p;ref = rb_entry(parent, struct binder_ref, rb_node_desc);if (new_ref->data.desc < ref->data.desc)p = &(*p)->rb_left;else if (new_ref->data.desc > ref->data.desc)p = &(*p)->rb_right;elseBUG();}rb_link_node(&new_ref->rb_node_desc, parent, p);rb_insert_color(&new_ref->rb_node_desc, &proc->refs_by_desc);binder_node_lock(node);hlist_add_head(&new_ref->node_entry, &node->refs);binder_debug(BINDER_DEBUG_INTERNAL_REFS,"%d new ref %d desc %d for node %d\n",proc->pid, new_ref->data.debug_id, new_ref->data.desc,node->debug_id);binder_node_unlock(node);return new_ref;
}
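注意上面new_ref->data.desc的计算逻辑:候选值先定为1(binder_context_mgr_node即service_manager固定占用0),然后按desc升序遍历已有引用,一旦出现空洞就停下,最终分配到的是"最小的未被占用的handle"。下面用一段脱离红黑树的简化代码还原这个分配算法,alloc_desc是笔者演示虚构的函数名:

```c
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* 还原 binder_get_ref_for_node_olocked 中 desc 的分配:
 * used[] 为已占用的 desc,按升序排列(对应按序遍历 refs_by_desc) */
static uint32_t alloc_desc(const uint32_t *used, size_t n, int is_context_mgr)
{
    uint32_t desc = is_context_mgr ? 0 : 1;
    for (size_t i = 0; i < n; i++) {
        if (used[i] > desc)
            break;                  // 出现空洞,当前候选值即可用
        desc = used[i] + 1;         // 候选值被占用,顺延
    }
    return desc;
}

int main(void)
{
    uint32_t used[] = { 0, 1, 2, 4 };                     // 假设 handle 3 已被释放
    printf("next handle = %u\n", alloc_desc(used, 4, 0)); // 输出 3
    return 0;
}
```

这也解释了为什么客户端第一次引用某个服务时,拿到的handle通常从1开始。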
4.5 把数据放到客户端的todo链表,唤醒客户端
static void binder_transaction(struct binder_proc *proc,struct binder_thread *thread,struct binder_transaction_data *tr, int reply,binder_size_t extra_buffers_size)
{......if (reply) {// 找到要回复的进程binder_inner_proc_lock(proc);in_reply_to = thread->transaction_stack;//从栈中取出binder_transaction,获得要回复给谁if (in_reply_to == NULL) {binder_inner_proc_unlock(proc);binder_user_error("%d:%d got reply transaction with no transaction stack\n",proc->pid, thread->pid);return_error = BR_FAILED_REPLY;return_error_param = -EPROTO;return_error_line = __LINE__;goto err_empty_call_stack;}if (in_reply_to->to_thread != thread) {spin_lock(&in_reply_to->lock);binder_user_error("%d:%d got reply transaction with bad transaction stack, transaction %d has target %d:%d\n",proc->pid, thread->pid, in_reply_to->debug_id,in_reply_to->to_proc ?in_reply_to->to_proc->pid : 0,in_reply_to->to_thread ?in_reply_to->to_thread->pid : 0);spin_unlock(&in_reply_to->lock);binder_inner_proc_unlock(proc);return_error = BR_FAILED_REPLY;return_error_param = -EPROTO;return_error_line = __LINE__;in_reply_to = NULL;goto err_bad_call_stack;}thread->transaction_stack = in_reply_to->to_parent;//出栈binder_inner_proc_unlock(proc);binder_set_nice(in_reply_to->saved_priority);target_thread = binder_get_txn_from_and_acq_inner(in_reply_to);if (target_thread == NULL) {/* annotation for sparse */__release(&target_thread->proc->inner_lock);binder_txn_error("%d:%d reply target not found\n",thread->pid, proc->pid);return_error = BR_DEAD_REPLY;return_error_line = __LINE__;goto err_dead_binder;}if (target_thread->transaction_stack != in_reply_to) {binder_user_error("%d:%d got reply transaction with bad target transaction stack %d, expected %d\n",proc->pid, thread->pid,target_thread->transaction_stack ?target_thread->transaction_stack->debug_id : 0,in_reply_to->debug_id);binder_inner_proc_unlock(target_thread->proc);return_error = BR_FAILED_REPLY;return_error_param = -EPROTO;return_error_line = __LINE__;in_reply_to = NULL;target_thread = NULL;goto err_dead_binder;}// 找到要回复的进程target_proc = target_thread->proc;target_proc->tmp_ref++;binder_inner_proc_unlock(target_thread->proc);} else {// 1. 
找到要发送的目的进程......}if (target_thread)e->to_thread = target_thread->pid;e->to_proc = target_proc->pid;/* TODO: reuse incoming transaction for reply */// 为binder_transcation分配内存t = kzalloc(sizeof(*t), GFP_KERNEL);.....if (!reply && !(tr->flags & TF_ONE_WAY))t->from = thread;elset->from = NULL;// 存储发送双方的基本信息t->from_pid = proc->pid;t->from_tid = thread->pid;t->sender_euid = task_euid(proc->tsk);t->to_proc = target_proc;t->to_thread = target_thread;t->code = tr->code;t->flags = tr->flags;t->priority = task_nice(current);......t->buffer = binder_alloc_new_buf(&target_proc->alloc, tr->data_size,tr->offsets_size, extra_buffers_size,!reply && (t->flags & TF_ONE_WAY));......t->buffer->debug_id = t->debug_id;t->buffer->transaction = t;t->buffer->target_node = target_node;t->buffer->clear_on_free = !!(t->flags & TF_CLEAR_BUF);trace_binder_transaction_alloc_buf(t->buffer);// 把客户端的数据拷贝到目的进程test_client mmap的内存空间if (binder_alloc_copy_user_to_buffer(&target_proc->alloc,t->buffer,ALIGN(tr->data_size, sizeof(void *)),(const void __user *)(uintptr_t)tr->data.ptr.offsets,tr->offsets_size)) {binder_user_error("%d:%d got transaction with invalid offsets ptr\n",proc->pid, thread->pid);return_error = BR_FAILED_REPLY;return_error_param = -EFAULT;return_error_line = __LINE__;goto err_copy_data_failed;}......//处理server传入的binder_io.offs数据,这个数据指向用于构建binder_node实体的 flat_binder_objectfor (buffer_offset = off_start_offset; buffer_offset < off_end_offset;buffer_offset += sizeof(binder_size_t)) {struct binder_object_header *hdr;size_t object_size;struct binder_object object;binder_size_t object_offset;binder_size_t copy_size;if (binder_alloc_copy_from_buffer(&target_proc->alloc,&object_offset,t->buffer,buffer_offset,sizeof(object_offset))) {binder_txn_error("%d:%d copy offset from buffer failed\n",thread->pid, proc->pid);return_error = BR_FAILED_REPLY;return_error_param = -EINVAL;return_error_line = __LINE__;goto err_bad_offset;}/** Copy the source user buffer up to the next object* that will be processed.*/copy_size = object_offset - user_offset;if (copy_size && (user_offset > object_offset ||binder_alloc_copy_user_to_buffer(&target_proc->alloc,t->buffer, user_offset,user_buffer + user_offset,copy_size))) {binder_user_error("%d:%d got transaction with invalid data ptr\n",proc->pid, thread->pid);return_error = BR_FAILED_REPLY;return_error_param = -EFAULT;return_error_line = __LINE__;goto err_copy_data_failed;}// 将指向flat_binder_object的指针拷贝给objectobject_size = binder_get_object(target_proc, user_buffer,t->buffer, object_offset, &object);if (object_size == 0 || object_offset < off_min) {binder_user_error("%d:%d got transaction with invalid offset (%lld, min %lld max %lld) or object.\n",proc->pid, thread->pid,(u64)object_offset,(u64)off_min,(u64)t->buffer->data_size);return_error = BR_FAILED_REPLY;return_error_param = -EINVAL;return_error_line = __LINE__;goto err_bad_offset;}/** Set offset to the next buffer fragment to be* copied*/user_offset = object_offset + object_size;hdr = &object.hdr;off_min = object_offset + object_size;// 此处 binder类型的是BINDER_TYPE_HANDLE,通过handle在service_manager的binder-ref中找到hello服务的binder_nodeswitch (hdr->type) {//处理binder实体case BINDER_TYPE_BINDER:case BINDER_TYPE_WEAK_BINDER: {......} break;//处理binder引用case BINDER_TYPE_HANDLE:case BINDER_TYPE_WEAK_HANDLE: {struct flat_binder_object *fp;fp = to_flat_binder_object(hdr);ret = binder_translate_handle(fp, t, thread);if (ret < 0 ||binder_alloc_copy_to_buffer(&target_proc->alloc,t->buffer,object_offset,fp, sizeof(*fp))) {binder_txn_error("%d:%d translate 
handle failed\n",thread->pid, proc->pid);return_error = BR_FAILED_REPLY;return_error_param = ret;return_error_line = __LINE__;goto err_translate_failed;}} break;......}......t->work.type = BINDER_WORK_TRANSACTION;if (reply) {binder_enqueue_thread_work(thread, tcomplete);binder_inner_proc_lock(target_proc);if (target_thread->is_dead) {return_error = BR_DEAD_REPLY;binder_inner_proc_unlock(target_proc);goto err_dead_proc_or_thread;}BUG_ON(t->buffer->async_transaction != 0);binder_pop_transaction_ilocked(target_thread, in_reply_to);//再次出栈// 将数据放到客户端target_thread的todo链表binder_enqueue_thread_work_ilocked(target_thread, &t->work);target_proc->outstanding_txns++;binder_inner_proc_unlock(target_proc);// 唤醒客户端wake_up_interruptible_sync(&target_thread->wait);binder_free_transaction(in_reply_to);} else if (!(t->flags & TF_ONE_WAY)) {......} else {......}if (target_thread)binder_thread_dec_tmpref(target_thread);binder_proc_dec_tmpref(target_proc);if (target_node)binder_dec_node_tmpref(target_node);/** write barrier to synchronize with initialization* of log entry*/smp_wmb();WRITE_ONCE(e->debug_id_done, t_debug_id);return;......
}static void
binder_enqueue_thread_work_ilocked(struct binder_thread *thread,struct binder_work *work)
{WARN_ON(!list_empty(&thread->waiting_thread_node));binder_enqueue_work_ilocked(work, &thread->todo);/* (e)poll-based threads require an explicit wakeup signal when* queuing their own work; they rely on these events to consume* messages without I/O block. Without it, threads risk waiting* indefinitely without handling the work.*/if (thread->looper & BINDER_LOOPER_STATE_POLL &&thread->pid == current->pid && !thread->process_todo)wake_up_interruptible_sync(&thread->wait);thread->process_todo = true;
}static void
binder_enqueue_work_ilocked(struct binder_work *work,struct list_head *target_list)
{BUG_ON(target_list == NULL);BUG_ON(work->entry.next && !list_empty(&work->entry));list_add_tail(&work->entry, target_list);
}
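binder_enqueue_work_ilocked的核心就是list_add_tail:把binder_work的entry节点挂到目标todo链表尾部。内核的list_head是"侵入式"双向循环链表,取出时再用container_of从链表节点反推出宿主结构体。下面是一个可独立编译的最小示意,binder_work_demo为笔者演示虚构的简化结构:

```c
#include <stdio.h>
#include <stddef.h>

/* 最小化的侵入式双向循环链表,示意内核 list_head/list_add_tail 的行为 */
struct list_head { struct list_head *prev, *next; };

static void init_list_head(struct list_head *h) { h->prev = h->next = h; }

static void list_add_tail(struct list_head *entry, struct list_head *head)
{
    entry->prev = head->prev;
    entry->next = head;
    head->prev->next = entry;
    head->prev = entry;
}

#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

struct binder_work_demo {            // binder_work 的极简模型
    struct list_head entry;
    int type;
};

int main(void)
{
    struct list_head todo;
    init_list_head(&todo);

    struct binder_work_demo w = { .type = 1 /* 示意 BINDER_WORK_TRANSACTION */ };
    init_list_head(&w.entry);
    list_add_tail(&w.entry, &todo);  // 对应 binder_enqueue_work_ilocked 的核心一步

    struct binder_work_demo *first =
        container_of(todo.next, struct binder_work_demo, entry);
    printf("todo 链表头部的 work type = %d\n", first->type);
    return 0;
}
```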
2.4 客户端被唤醒,获取客户端binder_ref对应的handle
handle = svcmgr_lookup(bs, svcmgr, "hello");uint32_t svcmgr_lookup(struct binder_state *bs, uint32_t target, const char *name)
{uint32_t handle;unsigned iodata[512/4];struct binder_io msg, reply;bio_init(&msg, iodata, sizeof(iodata), 4);bio_put_uint32(&msg, 0); // strict mode headerbio_put_string16_x(&msg, SVC_MGR_NAME);bio_put_string16_x(&msg, name);// ioctl到内核处理if (binder_call(bs, &msg, &reply, target, SVC_MGR_CHECK_SERVICE))return 0;// 获取引用服务binder_node的客户端binder_ref的handlehandle = bio_get_ref(&reply);if (handle)binder_acquire(bs, handle);binder_done(bs, &msg, &reply);return handle;
}uint32_t bio_get_ref(struct binder_io *bio)
{struct flat_binder_object *obj;obj = _bio_get_obj(bio);if (!obj)return 0;if (obj->type == BINDER_TYPE_HANDLE)return obj->handle;return 0;
}
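bio_get_ref依赖_bio_get_obj从reply中取出flat_binder_object:reply的offs数组记录了对象在data区中的偏移,只有当前读偏移恰好出现在offs数组里,这个位置才真的存放着一个对象。_bio_get_obj的源码本文没有贴出,下面按这个语义给出一个简化还原,结构体与数值均为演示用,并非真实定义:

```c
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

struct flat_obj_demo {            // flat_binder_object 的极简模型
    uint32_t type;                // 演示中用 1 表示 BINDER_TYPE_HANDLE
    uint32_t handle;              // 驱动替换后的客户端 handle
};

/* 模拟 _bio_get_obj:读偏移 off 必须出现在 offs 数组中,
 * 才能确认该位置存放的是一个 flat_binder_object */
static const struct flat_obj_demo *get_obj_demo(const char *data, size_t off,
                                                const size_t *offs, size_t noffs)
{
    for (size_t n = 0; n < noffs; n++)
        if (offs[n] == off)
            return (const struct flat_obj_demo *)(data + off);
    return NULL;
}

int main(void)
{
    /* 模拟 service_manager 的 reply:data 区开头就是一个 flat_binder_object,
     * offs[0]=0 记录了它的位置 */
    struct flat_obj_demo reply_obj = { .type = 1, .handle = 1 };
    size_t offs[1] = { 0 };

    const struct flat_obj_demo *obj =
        get_obj_demo((const char *)&reply_obj, 0, offs, 1);
    if (obj && obj->type == 1)    // 对应 bio_get_ref 对 BINDER_TYPE_HANDLE 的判断
        printf("获取到服务的 handle = %u\n", obj->handle);
    return 0;
}
```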
3 服务注册和获取过程的简要总结图
二、服务的使用过程内核源码解析
上面我们通过源码分析获得了客户端想要使用的服务的handle,下面我们接着分析如何使用该服务。
1. 服务使用过程思路
有了之前阅读Binder源码的经验,我们先梳理服务使用的整体思路,再基于这个思路去读源码,会容易理解得多。整体流程是:客户端拿到服务的handle后,依然通过ioctl(BINDER_WRITE_READ)向驱动发送BC_TRANSACTION;驱动根据handle在客户端进程的refs_by_desc红黑树中找到binder_ref,进而找到服务的binder_node及其宿主进程binder_proc,把数据拷贝到服务端mmap的内核缓冲区,并唤醒服务端;服务端处理完后再以BC_REPLY把结果原路返回给客户端。
2. 客户端使用服务内核源码解析
2.1 向服务端发送数据
int main(int argc, char **argv)
{int fd;struct binder_state *bs;uint32_t svcmgr = BINDER_SERVICE_MANAGER;uint32_t handle;int ret;if (argc < 2){fprintf(stderr, "Usage:\n");fprintf(stderr, "%s <hello|goodbye>\n", argv[0]);fprintf(stderr, "%s <hello|goodbye> <name>\n", argv[0]);return -1;}//打开驱动bs = binder_open(128*1024);if (!bs) {fprintf(stderr, "failed to open binder driver\n");return -1;}g_bs = bs;//向service_manager发送数据,获得hello服务句柄handle = svcmgr_lookup(bs, svcmgr, "hello");if (!handle) {fprintf(stderr, "failed to get hello service\n");return -1;}g_hello_handle = handle;fprintf(stderr, "Handle for hello service = %d\n", g_hello_handle);/* 向服务端发送数据 */if (!strcmp(argv[1], "hello")){if (argc == 2) {sayhello();} else if (argc == 3) {ret = sayhello_to(argv[2]);fprintf(stderr, "get ret of sayhello_to = %d\n", ret); }}binder_release(bs, handle);return 0;
}
1. sayhello_to
下面以调用服务端的sayhello_to函数为例,分析客户端使用服务的过程。
int sayhello_to(char *name)
{unsigned iodata[512/4];struct binder_io msg, reply;int ret;int exception;/* 构造binder_io */bio_init(&msg, iodata, sizeof(iodata), 4);bio_put_uint32(&msg, 0); // strict mode headerbio_put_string16_x(&msg, "IHelloService");/* 放入参数 */bio_put_string16_x(&msg, name);/* 调用binder_callmsg:客户端的数据reply:携带服务端返回的数据g_hello_handle:服务端进程的handleHELLO_SVR_CMD_SAYHELLO_TO:要调用的服务端提供的服务*/if (binder_call(g_bs, &msg, &reply, g_hello_handle, HELLO_SVR_CMD_SAYHELLO_TO))return 0;/* 从reply中解析出返回值 */exception = bio_get_uint32(&reply);if (exception)ret = -1;elseret = bio_get_uint32(&reply);binder_done(g_bs, &msg, &reply);return ret;}
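bio_init(&msg, iodata, sizeof(iodata), 4)的含义是:把iodata这块512字节缓冲区一分为二,前4*sizeof(size_t)字节作为offs数组(最多记录4个对象偏移),其余作为data区,后续bio_put_uint32、bio_put_string16_x都是在data区顺序追加。下面用一个可独立编译的小例示意这种划分,字段名参考原demo的binder_io,实现是笔者的简化版本:

```c
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

struct binder_io_demo {          // 参考 binder_io,仅保留示意所需字段
    char   *data;                // data 区当前写入位置
    size_t *offs;                // offs 区当前写入位置
    size_t  data_avail;          // data 区剩余空间
    size_t  offs_avail;          // offs 区剩余条目数
    char   *data0;               // data 区起点
    size_t *offs0;               // offs 区起点
};

/* 简化版 bio_init:缓冲区前部划给 offs 数组,其余划给 data 区 */
static void bio_init_demo(struct binder_io_demo *bio, void *buf,
                          size_t maxdata, size_t maxoffs)
{
    size_t n = maxoffs * sizeof(size_t);
    bio->offs = bio->offs0 = (size_t *)buf;
    bio->data = bio->data0 = (char *)buf + n;
    bio->offs_avail = maxoffs;
    bio->data_avail = maxdata - n;
}

/* 简化版 bio_put_uint32:在 data 区顺序追加一个 4 字节值 */
static void bio_put_uint32_demo(struct binder_io_demo *bio, uint32_t v)
{
    if (bio->data_avail < sizeof(v))
        return;                  // 真实实现此时会置 BIO_F_OVERFLOW 标志
    *(uint32_t *)(void *)bio->data = v;
    bio->data += sizeof(v);
    bio->data_avail -= sizeof(v);
}

int main(void)
{
    unsigned iodata[512/4];
    struct binder_io_demo msg;

    bio_init_demo(&msg, iodata, sizeof(iodata), 4);
    bio_put_uint32_demo(&msg, 0);            // 对应 strict mode header
    printf("data 区已写入 %ld 字节\n", (long)(msg.data - msg.data0));
    return 0;
}
```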
2. binder_call
binder_call函数前面已经分析过多次,这里不再展开,仅贴出代码供对照。
int binder_call(struct binder_state *bs,struct binder_io *msg, struct binder_io *reply,uint32_t target, uint32_t code)
{int res;struct binder_write_read bwr;struct {uint32_t cmd;struct binder_transaction_data txn;} __attribute__((packed)) writebuf;unsigned readbuf[32];if (msg->flags & BIO_F_OVERFLOW) {fprintf(stderr,"binder: txn buffer overflow\n");goto fail;}writebuf.cmd = BC_TRANSACTION;//ioclt类型writebuf.txn.target.handle = target;//数据发送给哪个进程writebuf.txn.code = code;//调用进程的哪个函数writebuf.txn.flags = 0;writebuf.txn.data_size = msg->data - msg->data0;//数据本身大小writebuf.txn.offsets_size = ((char*) msg->offs) - ((char*) msg->offs0);//数据头大小,指向binder_node实体(发送端提供服务函数的地址),bio_put_obj(&msg, ptr);writebuf.txn.data.ptr.buffer = (uintptr_t)msg->data0;//指向数据本身内存起点writebuf.txn.data.ptr.offsets = (uintptr_t)msg->offs0;//指向数据头内存起点bwr.write_size = sizeof(writebuf);bwr.write_consumed = 0;bwr.write_buffer = (uintptr_t) &writebuf;hexdump(msg->data0, msg->data - msg->data0);for (;;) {bwr.read_size = sizeof(readbuf);bwr.read_consumed = 0;bwr.read_buffer = (uintptr_t) readbuf;res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);//调用ioctl发送数据给驱动程序if (res < 0) {fprintf(stderr,"binder: ioctl failed (%s)\n", strerror(errno));goto fail;}res = binder_parse(bs, reply, (uintptr_t) readbuf, bwr.read_consumed, 0);if (res == 0) return 0;if (res < 0) goto fail;}fail:memset(reply, 0, sizeof(*reply));reply->flags |= BIO_F_IOERROR;return -1;
}
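binder_call发给驱动的write_buffer,实际是"4字节cmd紧跟一个binder_transaction_data"的紧凑布局,__attribute__((packed))保证两者之间没有编译器填充,驱动侧才能按"先读cmd、再读载荷"的方式逐段解析。下面的最小示意演示这种打包方式,txn_demo是笔者虚构的简化结构,并非真实的binder_transaction_data:

```c
#include <stdio.h>
#include <stdint.h>

struct txn_demo {            // binder_transaction_data 的极简模型
    uint32_t handle;         // 目标服务的 handle
    uint32_t code;           // 要调用的服务函数编号
    uint64_t data_size;      // 数据区大小
};

static struct {
    uint32_t cmd;            // 如 BC_TRANSACTION
    struct txn_demo txn;
} __attribute__((packed)) writebuf;

int main(void)
{
    writebuf.cmd = 0;        // 用 0 占位,真实值为 BC_TRANSACTION 宏
    writebuf.txn.handle = 1; // 假设 hello 服务的 handle 为 1
    writebuf.txn.code = 0;   // 假设为 HELLO_SVR_CMD_SAYHELLO_TO 的编号
    writebuf.txn.data_size = 64;

    /* packed 使 sizeof(writebuf) 恰好等于 4 + sizeof(txn),
     * 与驱动侧 binder_thread_write 的解析步长严格对应 */
    printf("sizeof(writebuf) = %zu (= 4 + %zu)\n",
           sizeof(writebuf), sizeof(struct txn_demo));
    return 0;
}
```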
3. binder_ioctl
用户态的ioctl调用会进入内核Binder驱动程序的binder_ioctl函数。这个函数也分析过很多遍了,相信看到这里的朋友对它已经很熟悉了。
// res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);进入binder驱动程序
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{int ret;struct binder_proc *proc = filp->private_data;struct binder_thread *thread;void __user *ubuf = (void __user *)arg;/*pr_info("binder_ioctl: %d:%d %x %lx\n",proc->pid, current->pid, cmd, arg);*/binder_selftest_alloc(&proc->alloc);trace_binder_ioctl(cmd, arg);ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);if (ret)goto err_unlocked;//为进程proc创建binder_threadthread = binder_get_thread(proc);if (thread == NULL) {ret = -ENOMEM;goto err;}switch (cmd) {case BINDER_WRITE_READ:ret = binder_ioctl_write_read(filp, arg, thread);if (ret)goto err;break;......}......
}
4. binder_ioctl_write_read
static int binder_ioctl_write_read(struct file *filp, unsigned long arg,struct binder_thread *thread)
{int ret = 0;struct binder_proc *proc = filp->private_data;void __user *ubuf = (void __user *)arg;struct binder_write_read bwr;//从用户空间拷贝数据到内核空间(这部分内核空间被mmap映射到了目标进程)if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {ret = -EFAULT;goto out;}binder_debug(BINDER_DEBUG_READ_WRITE,"%d:%d write %lld at %016llx, read %lld at %016llx\n",proc->pid, thread->pid,(u64)bwr.write_size, (u64)bwr.write_buffer,(u64)bwr.read_size, (u64)bwr.read_buffer);if (bwr.write_size > 0) {ret = binder_thread_write(proc, thread,bwr.write_buffer,bwr.write_size,&bwr.write_consumed);trace_binder_write_done(ret);if (ret < 0) {bwr.read_consumed = 0;if (copy_to_user(ubuf, &bwr, sizeof(bwr)))ret = -EFAULT;goto out;}}if (bwr.read_size > 0) {......}binder_debug(BINDER_DEBUG_READ_WRITE,"%d:%d wrote %lld of %lld, read return %lld of %lld\n",proc->pid, thread->pid,(u64)bwr.write_consumed, (u64)bwr.write_size,(u64)bwr.read_consumed, (u64)bwr.read_size);if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {ret = -EFAULT;goto out;}
out:return ret;
}
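binder_ioctl_write_read的关键在于binder_write_read是"双向"的:入参描述写缓冲和读缓冲,内核处理后把write_consumed/read_consumed写回用户空间,用户态据此知道写数据被消费了多少、读缓冲里有多少有效数据。下面用一个普通函数调用模拟这种"入参即出参"的约定,bwr_demo与fake_binder_write_read均为笔者演示虚构:

```c
#include <stdio.h>
#include <stdint.h>

struct bwr_demo {                 // binder_write_read 的极简模型
    uint64_t write_size;
    uint64_t write_consumed;      // 内核写回:写数据被消费了多少
    uint64_t read_size;
    uint64_t read_consumed;       // 内核写回:读缓冲中有效数据多少
};

/* 模拟内核侧处理:消费全部写数据,并"产生"8 字节返回数据 */
static int fake_binder_write_read(struct bwr_demo *bwr)
{
    bwr->write_consumed = bwr->write_size; // 对应 binder_thread_write
    bwr->read_consumed  = 8;               // 对应 binder_thread_read
    return 0;
}

int main(void)
{
    struct bwr_demo bwr = { .write_size = 20, .read_size = 128 };
    fake_binder_write_read(&bwr);          // 对应 ioctl(fd, BINDER_WRITE_READ, &bwr)
    printf("write %llu/%llu, read %llu/%llu\n",
           (unsigned long long)bwr.write_consumed,
           (unsigned long long)bwr.write_size,
           (unsigned long long)bwr.read_consumed,
           (unsigned long long)bwr.read_size);
    return 0;
}
```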
5. binder_thread_write
static int binder_thread_write(struct binder_proc *proc,struct binder_thread *thread,binder_uintptr_t binder_buffer, size_t size,binder_size_t *consumed)
{uint32_t cmd;struct binder_context *context = proc->context;// 获取数据buffer,根据上面总结的发送数据可知,这个buffer由cmd和binder_transcation_data两部分数据组成void __user *buffer = (void __user *)(uintptr_t)binder_buffer;// 发送来的数据consumed=0,因此ptr指向用户空间数据buffer的起点void __user *ptr = buffer + *consumed;// 指向数据buffer的末尾void __user *end = buffer + size;// 逐个读取客户端发送来的数据(cmd+binder_transcation_data)while (ptr < end && thread->return_error.cmd == BR_OK) {int ret;// 获取用户空间中buffer的cmd值if (get_user(cmd, (uint32_t __user *)ptr))return -EFAULT;// 移动指针到cmd的位置之后,指向binder_transcation_data数据的内存起点ptr += sizeof(uint32_t);trace_binder_command(cmd);if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.bc)) {atomic_inc(&binder_stats.bc[_IOC_NR(cmd)]);atomic_inc(&proc->stats.bc[_IOC_NR(cmd)]);atomic_inc(&thread->stats.bc[_IOC_NR(cmd)]);}// 根据上面总结的发送数据可知,cmd是BC_TRANSACTIONswitch (cmd) {....../*BC_TRANSACTION:进程发送信息的cmdBR_TRANSACTION:进程接收BC_TRANSACTION发送信息的cmdBC_REPLY:进程回复信息的cmdBR_REPLY:进程接收BC_REPLY回复信息的cmd*/case BC_TRANSACTION:case BC_REPLY: {struct binder_transaction_data tr;// 从用户空间拷贝binder_transaction_data到内核空间if (copy_from_user(&tr, ptr, sizeof(tr)))return -EFAULT;// 移动指针到binder_transaction_data的位置之后,指向下一个cmd数据的内存起点ptr += sizeof(tr);// 处理binder_transaction_data数据binder_transaction(proc, thread, &tr,cmd == BC_REPLY, 0);break;}}}......
}

注:内核中的get_user实际上是一个与体系结构相关的宏,下面是笔者给出的语义等价的简化示意,便于理解"从用户空间读取一个值"的效果,并非真实内核实现:

int get_user(int *val, const int __user *ptr)
{
    if (copy_from_user(val, ptr, sizeof(int)))
        return -EFAULT; // 拷贝失败,返回错误码
    return 0;           // 读取成功
}
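binder_thread_write的主体就是一个"游标在用户缓冲区上推进"的解析循环:先取4字节cmd,游标后移,再按cmd类型取出对应载荷,如此往复直到缓冲区耗尽。下面用纯用户态代码还原这个循环骨架,CMD值与payload_demo结构均为演示虚构,memcpy对应内核中的get_user/copy_from_user:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

enum { CMD_TRANSACTION = 1, CMD_REPLY = 2 }; // 演示用命令字,并非真实 BC_* 值

struct payload_demo { uint32_t code; };      // 对应 binder_transaction_data 的占位

/* 还原 binder_thread_write 的解析骨架:
 * ptr 从 buffer 走到 buffer+size,每轮先读 cmd 再读载荷 */
static void parse_write_buffer(const uint8_t *buffer, size_t size)
{
    const uint8_t *ptr = buffer;
    const uint8_t *end = buffer + size;

    while (ptr + sizeof(uint32_t) <= end) {
        uint32_t cmd;
        memcpy(&cmd, ptr, sizeof(cmd));      // 对应 get_user(cmd, ptr)
        ptr += sizeof(cmd);

        switch (cmd) {
        case CMD_TRANSACTION:
        case CMD_REPLY: {
            struct payload_demo tr;
            if (ptr + sizeof(tr) > end)
                return;
            memcpy(&tr, ptr, sizeof(tr));    // 对应 copy_from_user(&tr, ptr, sizeof(tr))
            ptr += sizeof(tr);
            printf("cmd=%u code=%u\n", cmd, tr.code);
            break;
        }
        default:
            return;                          // 未知命令,终止解析
        }
    }
}

int main(void)
{
    uint8_t buf[8];
    uint32_t cmd = CMD_TRANSACTION, code = 7;
    memcpy(buf, &cmd, sizeof(cmd));
    memcpy(buf + 4, &code, sizeof(code));
    parse_write_buffer(buf, sizeof(buf));
    return 0;
}
```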
6. binder_transaction
这个函数前面已经分析过很多遍了,这里只简要分析服务使用路径上的关键逻辑。
static void binder_transaction(struct binder_proc *proc,struct binder_thread *thread,struct binder_transaction_data *tr, int reply,binder_size_t extra_buffers_size)
{int ret;struct binder_transaction *t;struct binder_work *w;struct binder_work *tcomplete;binder_size_t buffer_offset = 0;binder_size_t off_start_offset, off_end_offset;binder_size_t off_min;binder_size_t sg_buf_offset, sg_buf_end_offset;binder_size_t user_offset = 0;struct binder_proc *target_proc = NULL;struct binder_thread *target_thread = NULL;struct binder_node *target_node = NULL;struct binder_transaction *in_reply_to = NULL;struct binder_transaction_log_entry *e;uint32_t return_error = 0;uint32_t return_error_param = 0;uint32_t return_error_line = 0;binder_size_t last_fixup_obj_off = 0;binder_size_t last_fixup_min_off = 0;struct binder_context *context = proc->context;int t_debug_id = atomic_inc_return(&binder_last_id);ktime_t t_start_time = ktime_get();char *secctx = NULL;u32 secctx_sz = 0;struct list_head sgc_head;struct list_head pf_head;// 客户端发送来的数据bufferconst void __user *user_buffer = (const void __user *)(uintptr_t)tr->data.ptr.buffer;INIT_LIST_HEAD(&sgc_head);INIT_LIST_HEAD(&pf_head);e = binder_transaction_log_add(&binder_transaction_log);e->debug_id = t_debug_id;e->call_type = reply ? 2 : !!(tr->flags & TF_ONE_WAY);e->from_proc = proc->pid;e->from_thread = thread->pid;e->target_handle = tr->target.handle;e->data_size = tr->data_size;e->offsets_size = tr->offsets_size;strscpy(e->context_name, proc->context->name, BINDERFS_MAX_NAME);binder_inner_proc_lock(proc);binder_set_extended_error(&thread->ee, t_debug_id, BR_OK, 0);binder_inner_proc_unlock(proc);if (reply) {//找到要回复的进程......} else {//1. 找到要发送的目的进程if (tr->target.handle) {// tr->target.handle == 0 代表是service_manager进程,否则是其它进程struct binder_ref *ref;/** There must already be a strong ref* on this node. If so, do a strong* increment on the node to ensure it* stays alive until the transaction is* done.*/binder_proc_lock(proc);//根据客户端发送来的handle找到获取binder_refref = binder_get_ref_olocked(proc, tr->target.handle,true);if (ref) {// 根据binder_ref拿到目的进程的binder_node和binder_proctarget_node = binder_get_node_refs_for_txn(ref->node, &target_proc,&return_error);} else {binder_user_error("%d:%d got transaction to invalid handle, %u\n",proc->pid, thread->pid, tr->target.handle);return_error = BR_FAILED_REPLY;}binder_proc_unlock(proc);} else {//处理service_manager进程......}......binder_inner_proc_unlock(proc);}......t->work.type = BINDER_WORK_TRANSACTION;if (reply) {......} else if (!(t->flags & TF_ONE_WAY)) {BUG_ON(t->buffer->async_transaction != 0);binder_inner_proc_lock(proc);/** Defer the TRANSACTION_COMPLETE, so we don't return to* userspace immediately; this allows the target process to* immediately start processing this transaction, reducing* latency. We will then return the TRANSACTION_COMPLETE when* the target replies (or there is an error).*/binder_enqueue_deferred_thread_work_ilocked(thread, tcomplete);t->need_reply = 1;t->from_parent = thread->transaction_stack;//入栈thread->transaction_stack = t;binder_inner_proc_unlock(proc);//将数据放入目的进程的binder_proc或binder_thread,并唤醒目的进程return_error = binder_proc_transaction(t,target_proc, target_thread);if (return_error) {binder_inner_proc_lock(proc);binder_pop_transaction_ilocked(thread, t);binder_inner_proc_unlock(proc);goto err_dead_proc_or_thread;}} else {......}
2.2 服务端被唤醒,处理客户端发送的数据
服务端的binder_loop函数中有一个死循环,一直在等待数据。现在数据来了,就可以开始读取和处理数据了。
int main(int argc, char **argv)
{int fd;struct binder_state *bs;uint32_t svcmgr = BINDER_SERVICE_MANAGER;uint32_t handle;int ret;bs = binder_open(128*1024);if (!bs) {fprintf(stderr, "failed to open binder driver\n");return -1;}/* add service */ret = svcmgr_publish(bs, svcmgr, "hello", hello_service_handler);if (ret) {fprintf(stderr, "failed to publish hello service\n");return -1;}ret = svcmgr_publish(bs, svcmgr, "goodbye", goodbye_service_handler);if (ret) {fprintf(stderr, "failed to publish goodbye service\n");}#if 0while (1){/* read data *//* parse data, and process *//* reply */}
#endifbinder_set_maxthreads(bs, 10);// 死循环等待读取客户端发送来的数据binder_loop(bs, test_server_handler);return 0;
}void binder_loop(struct binder_state *bs, binder_handler func)
{int res;struct binder_write_read bwr;uint32_t readbuf[32];bwr.write_size = 0;bwr.write_consumed = 0;bwr.write_buffer = 0;readbuf[0] = BC_ENTER_LOOPER;binder_write(bs, readbuf, sizeof(uint32_t));// 死循环等待读取客户端的数据for (;;) {bwr.read_size = sizeof(readbuf);bwr.read_consumed = 0;bwr.read_buffer = (uintptr_t) readbuf;// 读取客户端发送来的数据(这个内核源码过程和上面service_manager被唤醒后的过程一样,不再赘述)res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);if (res < 0) {ALOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));break;}// 解析客户端发送来的数据res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);if (res == 0) {ALOGE("binder_loop: unexpected reply?!\n");break;}if (res < 0) {ALOGE("binder_loop: io error %d %s\n", res, strerror(errno));break;}}
}int binder_parse(struct binder_state *bs, struct binder_io *bio,uintptr_t ptr, size_t size, binder_handler func)
{int r = 1;uintptr_t end = ptr + (uintptr_t) size;while (ptr < end) {uint32_t cmd = *(uint32_t *) ptr;ptr += sizeof(uint32_t);
#if TRACEfprintf(stderr,"%s:\n", cmd_name(cmd));
#endifswitch(cmd) {......case BR_TRANSACTION: {struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;if ((end - ptr) < sizeof(*txn)) {ALOGE("parse: txn too small!\n");return -1;}binder_dump_txn(txn);if (func) {unsigned rdata[256/4];struct binder_io msg;struct binder_io reply;int res;bio_init(&reply, rdata, sizeof(rdata), 4);bio_init_from_txn(&msg, txn);// 这里的msg就是客户端发送来的数据,func是服务端的函数test_server_handlerres = func(bs, txn, &msg, &reply);binder_send_reply(bs, &reply, txn->data.ptr.buffer, res);}ptr += sizeof(*txn);break;}......}}return r;
}int test_server_handler(struct binder_state *bs,struct binder_transaction_data *txn,struct binder_io *msg,struct binder_io *reply)
{int (*handler)(struct binder_state *bs,struct binder_transaction_data *txn,struct binder_io *msg,struct binder_io *reply);handler = (int (*)(struct binder_state *bs,struct binder_transaction_data *txn,struct binder_io *msg,struct binder_io *reply))txn->target.ptr; // 服务端的函数指针,在向service_manager注册服务的时候写入的,此处是hello_service_handlerreturn handler(bs, txn, msg, reply);
}int hello_service_handler(struct binder_state *bs,struct binder_transaction_data *txn,struct binder_io *msg,struct binder_io *reply)
{/* 根据txn->code知道要调用哪一个函数* 如果需要参数, 可以从msg取出* 如果要返回结果, 可以把结果放入reply*//* sayhello* sayhello_to*/uint16_t *s;char name[512];size_t len;uint32_t handle;uint32_t strict_policy;int i;// Equivalent to Parcel::enforceInterface(), reading the RPC// header with the strict mode policy mask and the interface name.// Note that we ignore the strict_policy and don't propagate it// further (since we do no outbound RPCs anyway).strict_policy = bio_get_uint32(msg);switch(txn->code) {case HELLO_SVR_CMD_SAYHELLO:sayhello();bio_put_uint32(reply, 0); /* no exception */return 0;case HELLO_SVR_CMD_SAYHELLO_TO:/* 从msg里取出字符串 */s = bio_get_string16(msg, &len); //"IHelloService"s = bio_get_string16(msg, &len); // nameif (s == NULL) {return -1;}for (i = 0; i < len; i++)name[i] = s[i];name[i] = '\0';/* 调用服务函数处理客户端的数据 */i = sayhello_to(name);/* 把结果放入reply */bio_put_uint32(reply, 0); /* no exception */bio_put_uint32(reply, i);break;default:fprintf(stderr, "unknown code %d\n", txn->code);return -1;}return 0;
}
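顺带一提,hello_service_handler把16位字符串逐字符截断拷贝进name[512]时没有检查len的上限,demo场景下问题不大,但更稳妥的写法应加上边界保护。下面是一个带边界检查的转换示意,name_from_string16为笔者演示虚构的函数名,行为与原demo的截断拷贝一致:

```c
#include <stdint.h>
#include <stddef.h>

/* 将 binder 传来的 UTF-16 字符串截断拷贝为 C 字符串,带边界检查
 * (与原 demo 一样只处理 ASCII 范围字符) */
static void name_from_string16(char *dst, size_t dst_size,
                               const uint16_t *s, size_t len)
{
    size_t i, n;

    if (dst_size == 0)
        return;
    n = (len < dst_size - 1) ? len : dst_size - 1;
    for (i = 0; i < n; i++)
        dst[i] = (char)s[i];
    dst[i] = '\0';
}
```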
2.3 客户端收到服务端处理的数据
从reply中获取服务端处理后的数据
int sayhello_to(char *name)
{unsigned iodata[512/4];struct binder_io msg, reply;int ret;int exception;/* 构造binder_io */bio_init(&msg, iodata, sizeof(iodata), 4);bio_put_uint32(&msg, 0); // strict mode headerbio_put_string16_x(&msg, "IHelloService");/* 放入参数 */bio_put_string16_x(&msg, name);/* 调用binder_call */if (binder_call(g_bs, &msg, &reply, g_hello_handle, HELLO_SVR_CMD_SAYHELLO_TO))return 0;/* 从reply中解析出返回值 */exception = bio_get_uint32(&reply);if (exception)ret = -1;elseret = bio_get_uint32(&reply);binder_done(g_bs, &msg, &reply);return ret;}
三、后记
自此,我们通过三篇文章完成了Binder跨进程通信的源码分析。我们深入内核分析了Binder跨进程通信的实现源码,相信通过这三篇文章,大家已经能比较深入地理解Binder跨进程通信的实现方案。其实内核源码我们只分析了主干,更细节的部分没有深入展开,但我相信有了现在的基础,再独立去更深入地分析内核源码会轻松不少,至少不会像无头苍蝇一样无从下手。
说实话,这几篇Android Binder内核源码分析,我自认为写得还不够好,仍有不少可以改善的空间和没有讲清楚的地方。后面我会不断加强自己的技术能力,希望未来有一天,能重新写一篇更加通俗易懂的Binder驱动内核源码分析。
跟着carl学算法,本系列博客仅做个人记录,建议大家都去看carl本人的博客,写的真的很好的! 代码随想录 LeetCode:37. 解数独 编写一个程序,通过填充空格来解决数独问题。 数独的解法需 遵循如下规则ÿ…...
如何在idea中搭建SpringBoot项目
如何在idea中快速搭建SpringBoot项目 目录 如何在idea中快速搭建SpringBoot项目前言一、环境准备:搭建前的精心布局 1.下载jdk (1)安装JDK:(2)运行安装程序:(3)设置安装…...
STM32补充——FLASH
目录 1.内部FLASH构成(F1) 2.FLASH读写过程(F1) 2.1内存的读取 2.2闪存的写入 2.3FLASH接口寄存器(写入 & 擦除相关) 3.FLASH相关HAL库函数简介(F1/F4/F7/H7) 4.编程实战 …...
ASP.NET Core 中的 JWT 鉴权实现
在当今的软件开发中,安全性和用户认证是至关重要的方面。JSON Web Token(JWT)作为一种流行的身份验证机制,因其简洁性和无状态特性而被广泛应用于各种应用中,尤其是在 ASP.NET Core 项目里。本文将详细介绍如何在 ASP.…...
Docker配置国内镜像源
访问docker hub需要科学上网 在 Docker 中配置镜像地址(即镜像加速器)可以显著提升拉取镜像的速度,尤其是在国内访问 Docker Hub 时。以下是详细的配置方法: 1. 配置镜像加速器 Docker 支持通过修改配置文件来添加镜像加速器地址…...
qiankun+vite+vue3
基座与子应用代码示例 本示例中,基座为Vue3,子应用也是Vue3,由于qiankun不支持Vite构建的项目,这里还要引入 vite-plugin-qiankun 插件 基座(主应用) 加载qiankun依赖 npm i qiankun -S qiankun配置(src/qiankun) src/qiankun/config.ts export default {subApp…...
如何使用AI工具cursor(内置ChatGPT 4o+claude-3.5)
⚠️温馨提示: 禁止商业用途,请支持正版,充值使用,尊重知识产权! 免责声明: 1、本教程仅用于学习和研究使用,不得用于商业或非法行为。 2、请遵守Cursor的服务条款以及相关法律法规。 3、本…...
Linux内核编程(二十一)USB驱动开发-键盘驱动
一、驱动类型 USB 驱动开发主要分为两种:主机侧的驱动程序和设备侧的驱动程序。一般我们编写的都是主机侧的USB驱动程序。 主机侧驱动程序用于控制插入到主机中的 USB 设备,而设备侧驱动程序则负责控制 USB 设备如何与主机通信。由于设备侧驱动程序通常与…...
vue3+ts watch 整理
watch() 一共可以接受三个参数,侦听数据源、回调函数和配置选项 作用:监视数据的变化(和Vue2中的watch作用一致) 特点:Vue3中的watch只能监视以下四种数据: ref定义的数据。 reactive定义的数据。 函数返…...
2025年最新深度学习环境搭建:Win11+ cuDNN + CUDA + Pytorch +深度学习环境配置保姆级教程
本文目录 一、查看驱动版本1.1 查看显卡驱动1.2 显卡驱动和CUDA对应版本1.3 Pytorch和Python对应的版本1.4 Pytorch和CUDA对应的版本 二、安装CUDA三、安装cuDANN四、安装pytorch五、验证是否安装成功 一、查看驱动版本 1.1 查看显卡驱动 输入命令nvidia-smi可以查看对应的驱…...
USART_串口通讯轮询案例(HAL库实现)
引言 前面讲述的串口通讯案例是使用寄存器方式实现的,有利于深入理解串口通讯底层原理,但其开发效率较低;对此,我们这里再讲基于HAL库实现的串口通讯轮询案例,实现高效开发。当然,本次案例需求仍然和前面寄…...
CAN 网络介绍
背景 在T-Box 产品开发过程中,我们离不开CAN总线,因为CAN总线为我们提供了车身的相关数据,比如,车速、油耗、温度等。用于上报TSP平台,进行国标认证;也帮助我们进行车身控制,比如车门解锁/闭锁…...
pytorch 多机多卡训练方法
在深度学习训练中,使用多机多卡(多台机器和多块 GPU)可以显著加速模型训练过程。 PyTorch 提供了多种方法来实现多机多卡训练,以下是一些常用的方法和步骤: 1. 使用 torch.distributed 包 PyTorch 的 torch.distribut…...
【智能控制】年末总结,模糊控制,神经网络控制,专家控制,遗传算法
关注作者了解更多 我的其他CSDN专栏 毕业设计 求职面试 大学英语 过程控制系统 工程测试技术 虚拟仪器技术 可编程控制器 工业现场总线 数字图像处理 智能控制 传感器技术 嵌入式系统 复变函数与积分变换 单片机原理 线性代数 大学物理 热工与工程流体力学 …...
Linux系统 C/C++编程基础——使用make工具和Makefile实现自动编译
ℹ️大家好,我是练小杰,今天周二了,距离除夕只有6天了,新的一年就快到了😆 本文是有关Linux C/C编程的make和Makefile实现自动编译相关知识点,后续会不断添加相关内容 ~~ 回顾:【Emacs编辑器、G…...
kafka学习笔记7 性能测试 —— 筑梦之路
kafka 不同的参数配置对 kafka 性能都会造成影响,通常情况下集群性能受分区、磁盘和线程等影响因素,因此需要进行性能测试,找出集群性能瓶颈和最佳参数。 # 生产者和消费者的性能测试工具 kafka-producer-perf-test.sh kafka-consumer-perf-t…...
C#与AI的共同发展
C#与人工智能(AI)的共同发展反映了编程语言随着技术进步而演变,以适应新的挑战和需要。自2000年微软推出C#以来,这门语言经历了多次迭代,不仅成为了.NET平台的主要编程语言之一,还逐渐成为构建各种类型应用程序的强大工具。随着时…...
multus使用教程
操作步骤如下: 1.在vmware vsphere上配置所有主机使用的端口组安全项 Forged transmits 设置为: Accept Promiscuous Mode 设置为:Accept Promiscuous Mode(混杂模式)和Forged Transmits(伪传输)…...
用JAVA写算法之输入输出篇
本系列适合原来用C语言或其他语言写算法,但是因为找工作或比赛的原因改用JAVA语言写算法的同学。当然也同样适合初学算法,想用JAVA来写算法题的同学。 常规方法:使用Scanner类和System.out 这种方法适用于leetcode,以及一些面试手…...
场馆预定平台高并发时间段预定实现V2
🎯 本文档介绍了场馆预订系统接口V2的设计与实现,旨在解决V1版本中库存数据不一致及性能瓶颈的问题。通过引入令牌机制确保缓存和数据库库存的最终一致性,避免因服务器故障导致的库存错误占用问题。同时,采用消息队列异步处理库存…...
(1)STM32 USB设备开发-基础知识
开篇感谢: 【经验分享】STM32 USB相关知识扫盲 - STM32团队 ST意法半导体中文论坛 单片机学习记录_桃成蹊2.0的博客-CSDN博客 USB_不吃鱼的猫丿的博客-CSDN博客 1、USB鼠标_哔哩哔哩_bilibili usb_冰糖葫的博客-CSDN博客 USB_lqonlylove的博客-CSDN博客 USB …...
Spring Boot 整合 ShedLock 处理定时任务重复执行的问题
🌷 古之立大事者,不惟有超世之才,亦必有坚忍不拔之志 🎐 个人CSND主页——Micro麦可乐的博客 🐥《Docker实操教程》专栏以最新的Centos版本为基础进行Docker实操教程,入门到实战 🌺《RabbitMQ》…...
缓存之美:万文详解 Caffeine 实现原理(上)
由于社区最大字数限制,本文章将分为两篇,第二篇文章为缓存之美:万文详解 Caffeine 实现原理(下) 大家好,我是 方圆。文章将采用“总-分-总”的结构对配置固定大小元素驱逐策略的 Caffeine 缓存进行介绍&…...
PHP语言的网络编程
PHP语言的网络编程 网络编程是现代软件开发中不可或缺的一部分,尤其是在日益发展的互联网时代。PHP(Hypertext Preprocessor)是一种广泛使用的开源脚本语言,专门用于Web开发。它的灵活性、易用性以及强大的社区支持使得PHP在网络…...