
Week J2: ResNet50V2 Hands-On Practice and Analysis

Table of Contents

  • I. Preparation
    • 1. Set up the GPU
    • 2. Import the data
    • 3. Inspect the data
  • II. Data Preprocessing
    • 1. Load the data
    • 2. Visualize the data
  • Summary

  • 🍨 This post is a learning-record entry for the 🔗365天深度学习训练营 (365-day deep learning camp)
  • 🍖 Original author: K同学啊

I. Preparation

1. Set up the GPU

import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")

if gpus:
    tf.config.experimental.set_memory_growth(gpus[0], True)  # enable memory growth on the GPU
    tf.config.set_visible_devices([gpus[0]], "GPU")          # use only the first GPU

2. Import the data

import matplotlib.pyplot as plt
import os, PIL, pathlib
import numpy as np

from tensorflow import keras
from tensorflow.keras import layers, models

# Support Chinese characters in plots
plt.rcParams['font.sans-serif'] = ['SimHei']  # display Chinese labels correctly
plt.rcParams['axes.unicode_minus'] = False    # display the minus sign correctly

data_dir = "8/bird_photos"
data_dir = pathlib.Path(data_dir)

3. Inspect the data

image_count = len(list(data_dir.glob('*/*')))
print("Total number of images:", image_count)

Total number of images: 565

II. Data Preprocessing

1. Load the data

batch_size = 8
img_height = 224
img_width = 224

"""
For a detailed introduction to image_dataset_from_directory(), see:
https://mtyjkh.blog.csdn.net/article/details/117018789
"""
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="training",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)

Found 565 files belonging to 4 classes.
Using 452 files for training.

"""
关于image_dataset_from_directory()的详细介绍可以参考文章:https://mtyjkh.blog.csdn.net/article/details/117018789
"""
val_ds = tf.keras.preprocessing.image_dataset_from_directory(data_dir,validation_split=0.2,subset="validation",seed=123,image_size=(img_height, img_width),batch_size=batch_size)

Found 565 files belonging to 4 classes.
Using 113 files for validation.

class_names = train_ds.class_names
print(class_names)

['Bananaquit', 'Black Skimmer', 'Black Throated Bushtiti', 'Cockatoo']

2. Visualize the data

plt.figure(figsize=(10, 5))  # figure width 10, height 5
plt.suptitle("微信公众号:K同学啊")

for images, labels in train_ds.take(1):
    for i in range(8):
        ax = plt.subplot(2, 4, i + 1)
        plt.imshow(images[i].numpy().astype("uint8"))
        plt.title(class_names[labels[i]])
        plt.axis("off")

[Figure: a 2×4 grid of sample bird images from the training set, titled with their class names]

plt.imshow(images[1].numpy().astype("uint8"))
<matplotlib.image.AxesImage at 0x16a64ec80>

[Figure: a single sample image displayed with imshow]

3. Check the data again

for image_batch, labels_batch in train_ds:
    print(image_batch.shape)
    print(labels_batch.shape)
    break

(8, 224, 224, 3)
(8,)

2025-02-14 10:51:27.038555: I tensorflow/core/common_runtime/executor.cc:1197] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_4' with dtype int32 and shape [452]
[[{{node Placeholder/_4}}]]

4. Configure the dataset

AUTOTUNE = tf.data.AUTOTUNE

train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)

IV. Reproducing the Model

tf.keras.applications.resnet_v2.ResNet50V2(
    include_top=True,
    weights='imagenet',
    input_tensor=None,
    input_shape=None,
    pooling=None,
    classes=1000,
    classifier_activation='softmax'
)

<keras.engine.functional.Functional at 0x16ab97a60>
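The call above simply instantiates the official Keras ResNet50V2 for reference; everything that follows rebuilds the same architecture layer by layer. As a quick cross-check, a minimal sketch (not part of the original notebook; the variable name official_model is an assumption) can hold on to the official model and compare its parameter count with the hand-built version later:

official_model = tf.keras.applications.resnet_v2.ResNet50V2(
    include_top=True, weights=None, classes=1000)  # weights=None avoids the ImageNet download

print("Official ResNet50V2 parameters:", official_model.count_params())
# The hand-built model's summary below should report the same total: 25,613,800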

import tensorflow as tf
import tensorflow.keras.layers as layers
from tensorflow.keras.models import Model

1. Residual Block

def block2(x, filters, kernel_size=3, stride=1, conv_shortcut=False, name=None):
    # Pre-activation: BN and ReLU come before the convolutions (the key change in ResNet v2)
    preact = layers.BatchNormalization(name=name + '_preact_bn')(x)
    preact = layers.Activation('relu', name=name + '_preact_relu')(preact)

    # Shortcut branch: 1x1 convolution when the channel count changes, otherwise identity
    # (or a stride-matching max pooling when downsampling)
    if conv_shortcut:
        shortcut = layers.Conv2D(4 * filters, 1, strides=stride, name=name + '_0_conv')(preact)
    else:
        shortcut = layers.MaxPooling2D(1, strides=stride)(x) if stride > 1 else x

    # Main branch: 1x1 -> 3x3 -> 1x1 bottleneck
    x = layers.Conv2D(filters, 1, strides=1, use_bias=False, name=name + '_1_conv')(preact)
    x = layers.BatchNormalization(name=name + '_1_bn')(x)
    x = layers.Activation('relu', name=name + '_1_relu')(x)

    x = layers.ZeroPadding2D(padding=((1, 1), (1, 1)), name=name + '_2_pad')(x)
    x = layers.Conv2D(filters, kernel_size, strides=stride, use_bias=False, name=name + '_2_conv')(x)
    x = layers.BatchNormalization(name=name + '_2_bn')(x)
    x = layers.Activation('relu', name=name + '_2_relu')(x)

    x = layers.Conv2D(4 * filters, 1, name=name + '_3_conv')(x)
    x = layers.Add(name=name + '_out')([shortcut, x])
    return x
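To get a feel for what block2 does to tensor shapes, here is a minimal sanity check (not in the original post; the 56×56×64 dummy input and the block name are illustrative assumptions):

# Sketch: apply block2 once to a dummy feature map and inspect the output shape.
inp = layers.Input(shape=(56, 56, 64))
out = block2(inp, filters=64, conv_shortcut=True, name='demo_block')
print(Model(inp, out).output_shape)  # expected: (None, 56, 56, 256)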

2. Stacking Residual Blocks

def stack2(x, filters, blocks, stride1=2, name=None):
    x = block2(x, filters, conv_shortcut=True, name=name + '_block1')
    for i in range(2, blocks):
        x = block2(x, filters, name=name + '_block' + str(i))
    x = block2(x, filters, stride=stride1, name=name + '_block' + str(blocks))
    return x
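Note that, unlike ResNet v1 where downsampling happens in the first block of a stage, this v2-style stack applies the stride in the last block. A quick shape check (again a sketch; the dummy input is an assumption):

# Sketch: the spatial size is halved only in the final block of the stack.
inp = layers.Input(shape=(56, 56, 64))
out = stack2(inp, filters=64, blocks=3, name='demo_stack')
print(Model(inp, out).output_shape)  # expected: (None, 28, 28, 256)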

3. Reproducing the ResNet50V2 Architecture

def ResNet50V2(include_top=True,              # whether to include the fully connected layer at the top of the network
               preact=True,                   # whether to use pre-activation
               use_bias=True,                 # whether to use a bias in the convolution layers
               weights='imagenet',
               input_tensor=None,             # optional Keras tensor to use as the model's image input
               input_shape=None,
               pooling=None,
               classes=1000,                  # optional number of classes to classify images into
               classifier_activation='softmax'):  # activation of the classification layer
    img_input = layers.Input(shape=input_shape)

    x = layers.ZeroPadding2D(padding=((3, 3), (3, 3)), name='conv1_pad')(img_input)
    x = layers.Conv2D(64, 7, strides=2, use_bias=use_bias, name='conv1_conv')(x)

    if not preact:
        x = layers.BatchNormalization(name='conv1_bn')(x)
        x = layers.Activation('relu', name='conv1_relu')(x)

    x = layers.ZeroPadding2D(padding=((1, 1), (1, 1)), name='pool1_pad')(x)
    x = layers.MaxPooling2D(3, strides=2, name='pool1_pool')(x)

    x = stack2(x, 64, 3, name='conv2')
    x = stack2(x, 128, 4, name='conv3')
    x = stack2(x, 256, 6, name='conv4')
    x = stack2(x, 512, 3, stride1=1, name='conv5')

    if preact:
        x = layers.BatchNormalization(name='post_bn')(x)
        x = layers.Activation('relu', name='post_relu')(x)

    if include_top:
        x = layers.GlobalAveragePooling2D(name='avg_pool')(x)
        x = layers.Dense(classes, activation=classifier_activation, name='predictions')(x)
    else:
        if pooling == 'avg':
            # GlobalAveragePooling2D averages each channel of a feature map over its spatial
            # dimensions, so the width/height dimensions disappear and only the per-channel
            # mean values remain (think of it as many single-pixel images).
            x = layers.GlobalAveragePooling2D(name='avg_pool')(x)
        elif pooling == 'max':
            x = layers.GlobalMaxPooling2D(name='max_pool')(x)

    model = Model(img_input, x)
    return model


if __name__ == '__main__':
    model = ResNet50V2(input_shape=(224, 224, 3))
    model.summary()
Model: "model"
__________________________________________________________________________________________________Layer (type)                   Output Shape         Param #     Connected to                     
==================================================================================================input_2 (InputLayer)           [(None, 224, 224, 3  0           []                               )]                                                                conv1_pad (ZeroPadding2D)      (None, 230, 230, 3)  0           ['input_2[0][0]']                conv1_conv (Conv2D)            (None, 112, 112, 64  9472        ['conv1_pad[0][0]']              )                                                                 pool1_pad (ZeroPadding2D)      (None, 114, 114, 64  0           ['conv1_conv[0][0]']             )                                                                 pool1_pool (MaxPooling2D)      (None, 56, 56, 64)   0           ['pool1_pad[0][0]']              conv2_block1_preact_bn (BatchN  (None, 56, 56, 64)  256         ['pool1_pool[0][0]']             ormalization)                                                                                    conv2_block1_preact_relu (Acti  (None, 56, 56, 64)  0           ['conv2_block1_preact_bn[0][0]'] vation)                                                                                          conv2_block1_1_conv (Conv2D)   (None, 56, 56, 64)   4096        ['conv2_block1_preact_relu[0][0]']                                conv2_block1_1_bn (BatchNormal  (None, 56, 56, 64)  256         ['conv2_block1_1_conv[0][0]']    ization)                                                                                         conv2_block1_1_relu (Activatio  (None, 56, 56, 64)  0           ['conv2_block1_1_bn[0][0]']      n)                                                                                               conv2_block1_2_pad (ZeroPaddin  (None, 58, 58, 64)  0           ['conv2_block1_1_relu[0][0]']    g2D)                                                                                             conv2_block1_2_conv (Conv2D)   (None, 56, 56, 64)   36864       ['conv2_block1_2_pad[0][0]']     conv2_block1_2_bn (BatchNormal  (None, 56, 56, 64)  256         ['conv2_block1_2_conv[0][0]']    ization)                                                                                         conv2_block1_2_relu (Activatio  (None, 56, 56, 64)  0           ['conv2_block1_2_bn[0][0]']      n)                                                                                               conv2_block1_0_conv (Conv2D)   (None, 56, 56, 256)  16640       ['conv2_block1_preact_relu[0][0]']                                conv2_block1_3_conv (Conv2D)   (None, 56, 56, 256)  16640       ['conv2_block1_2_relu[0][0]']    conv2_block1_out (Add)         (None, 56, 56, 256)  0           ['conv2_block1_0_conv[0][0]',    'conv2_block1_3_conv[0][0]']    conv2_block2_preact_bn (BatchN  (None, 56, 56, 256)  1024       ['conv2_block1_out[0][0]']       ormalization)                                                                                    conv2_block2_preact_relu (Acti  (None, 56, 56, 256)  0          ['conv2_block2_preact_bn[0][0]'] vation)                                                                                          conv2_block2_1_conv (Conv2D)   (None, 56, 56, 64)   16384       ['conv2_block2_preact_relu[0][0]']                                conv2_block2_1_bn (BatchNormal  (None, 56, 56, 64)  256         ['conv2_block2_1_conv[0][0]']    ization)                                                                                         conv2_block2_1_relu (Activatio  (None, 56, 56, 64)  0           ['conv2_block2_1_bn[0][0]']      n)                      
                                                                         conv2_block2_2_pad (ZeroPaddin  (None, 58, 58, 64)  0           ['conv2_block2_1_relu[0][0]']    g2D)                                                                                             conv2_block2_2_conv (Conv2D)   (None, 56, 56, 64)   36864       ['conv2_block2_2_pad[0][0]']     conv2_block2_2_bn (BatchNormal  (None, 56, 56, 64)  256         ['conv2_block2_2_conv[0][0]']    ization)                                                                                         conv2_block2_2_relu (Activatio  (None, 56, 56, 64)  0           ['conv2_block2_2_bn[0][0]']      n)                                                                                               conv2_block2_3_conv (Conv2D)   (None, 56, 56, 256)  16640       ['conv2_block2_2_relu[0][0]']    conv2_block2_out (Add)         (None, 56, 56, 256)  0           ['conv2_block1_out[0][0]',       'conv2_block2_3_conv[0][0]']    conv2_block3_preact_bn (BatchN  (None, 56, 56, 256)  1024       ['conv2_block2_out[0][0]']       ormalization)                                                                                    conv2_block3_preact_relu (Acti  (None, 56, 56, 256)  0          ['conv2_block3_preact_bn[0][0]'] vation)                                                                                          conv2_block3_1_conv (Conv2D)   (None, 56, 56, 64)   16384       ['conv2_block3_preact_relu[0][0]']                                conv2_block3_1_bn (BatchNormal  (None, 56, 56, 64)  256         ['conv2_block3_1_conv[0][0]']    ization)                                                                                         conv2_block3_1_relu (Activatio  (None, 56, 56, 64)  0           ['conv2_block3_1_bn[0][0]']      n)                                                                                               conv2_block3_2_pad (ZeroPaddin  (None, 58, 58, 64)  0           ['conv2_block3_1_relu[0][0]']    g2D)                                                                                             conv2_block3_2_conv (Conv2D)   (None, 28, 28, 64)   36864       ['conv2_block3_2_pad[0][0]']     conv2_block3_2_bn (BatchNormal  (None, 28, 28, 64)  256         ['conv2_block3_2_conv[0][0]']    ization)                                                                                         conv2_block3_2_relu (Activatio  (None, 28, 28, 64)  0           ['conv2_block3_2_bn[0][0]']      n)                                                                                               max_pooling2d_3 (MaxPooling2D)  (None, 28, 28, 256)  0          ['conv2_block2_out[0][0]']       conv2_block3_3_conv (Conv2D)   (None, 28, 28, 256)  16640       ['conv2_block3_2_relu[0][0]']    conv2_block3_out (Add)         (None, 28, 28, 256)  0           ['max_pooling2d_3[0][0]',        'conv2_block3_3_conv[0][0]']    conv3_block1_preact_bn (BatchN  (None, 28, 28, 256)  1024       ['conv2_block3_out[0][0]']       ormalization)                                                                                    conv3_block1_preact_relu (Acti  (None, 28, 28, 256)  0          ['conv3_block1_preact_bn[0][0]'] vation)                                                                                          conv3_block1_1_conv (Conv2D)   (None, 28, 28, 128)  32768       ['conv3_block1_preact_relu[0][0]']                                conv3_block1_1_bn (BatchNormal  (None, 28, 28, 128)  512        ['conv3_block1_1_conv[0][0]']    ization)                                              
                                           conv3_block1_1_relu (Activatio  (None, 28, 28, 128)  0          ['conv3_block1_1_bn[0][0]']      n)                                                                                               conv3_block1_2_pad (ZeroPaddin  (None, 30, 30, 128)  0          ['conv3_block1_1_relu[0][0]']    g2D)                                                                                             conv3_block1_2_conv (Conv2D)   (None, 28, 28, 128)  147456      ['conv3_block1_2_pad[0][0]']     conv3_block1_2_bn (BatchNormal  (None, 28, 28, 128)  512        ['conv3_block1_2_conv[0][0]']    ization)                                                                                         conv3_block1_2_relu (Activatio  (None, 28, 28, 128)  0          ['conv3_block1_2_bn[0][0]']      n)                                                                                               conv3_block1_0_conv (Conv2D)   (None, 28, 28, 512)  131584      ['conv3_block1_preact_relu[0][0]']                                conv3_block1_3_conv (Conv2D)   (None, 28, 28, 512)  66048       ['conv3_block1_2_relu[0][0]']    conv3_block1_out (Add)         (None, 28, 28, 512)  0           ['conv3_block1_0_conv[0][0]',    'conv3_block1_3_conv[0][0]']    conv3_block2_preact_bn (BatchN  (None, 28, 28, 512)  2048       ['conv3_block1_out[0][0]']       ormalization)                                                                                    conv3_block2_preact_relu (Acti  (None, 28, 28, 512)  0          ['conv3_block2_preact_bn[0][0]'] vation)                                                                                          conv3_block2_1_conv (Conv2D)   (None, 28, 28, 128)  65536       ['conv3_block2_preact_relu[0][0]']                                conv3_block2_1_bn (BatchNormal  (None, 28, 28, 128)  512        ['conv3_block2_1_conv[0][0]']    ization)                                                                                         conv3_block2_1_relu (Activatio  (None, 28, 28, 128)  0          ['conv3_block2_1_bn[0][0]']      n)                                                                                               conv3_block2_2_pad (ZeroPaddin  (None, 30, 30, 128)  0          ['conv3_block2_1_relu[0][0]']    g2D)                                                                                             conv3_block2_2_conv (Conv2D)   (None, 28, 28, 128)  147456      ['conv3_block2_2_pad[0][0]']     conv3_block2_2_bn (BatchNormal  (None, 28, 28, 128)  512        ['conv3_block2_2_conv[0][0]']    ization)                                                                                         conv3_block2_2_relu (Activatio  (None, 28, 28, 128)  0          ['conv3_block2_2_bn[0][0]']      n)                                                                                               conv3_block2_3_conv (Conv2D)   (None, 28, 28, 512)  66048       ['conv3_block2_2_relu[0][0]']    conv3_block2_out (Add)         (None, 28, 28, 512)  0           ['conv3_block1_out[0][0]',       'conv3_block2_3_conv[0][0]']    conv3_block3_preact_bn (BatchN  (None, 28, 28, 512)  2048       ['conv3_block2_out[0][0]']       ormalization)                                                                                    conv3_block3_preact_relu (Acti  (None, 28, 28, 512)  0          ['conv3_block3_preact_bn[0][0]'] vation)                                                                                          conv3_block3_1_conv (Conv2D)   (None, 28, 28, 128)  65536       
['conv3_block3_preact_relu[0][0]']                                conv3_block3_1_bn (BatchNormal  (None, 28, 28, 128)  512        ['conv3_block3_1_conv[0][0]']    ization)                                                                                         conv3_block3_1_relu (Activatio  (None, 28, 28, 128)  0          ['conv3_block3_1_bn[0][0]']      n)                                                                                               conv3_block3_2_pad (ZeroPaddin  (None, 30, 30, 128)  0          ['conv3_block3_1_relu[0][0]']    g2D)                                                                                             conv3_block3_2_conv (Conv2D)   (None, 28, 28, 128)  147456      ['conv3_block3_2_pad[0][0]']     conv3_block3_2_bn (BatchNormal  (None, 28, 28, 128)  512        ['conv3_block3_2_conv[0][0]']    ization)                                                                                         conv3_block3_2_relu (Activatio  (None, 28, 28, 128)  0          ['conv3_block3_2_bn[0][0]']      n)                                                                                               conv3_block3_3_conv (Conv2D)   (None, 28, 28, 512)  66048       ['conv3_block3_2_relu[0][0]']    conv3_block3_out (Add)         (None, 28, 28, 512)  0           ['conv3_block2_out[0][0]',       'conv3_block3_3_conv[0][0]']    conv3_block4_preact_bn (BatchN  (None, 28, 28, 512)  2048       ['conv3_block3_out[0][0]']       ormalization)                                                                                    conv3_block4_preact_relu (Acti  (None, 28, 28, 512)  0          ['conv3_block4_preact_bn[0][0]'] vation)                                                                                          conv3_block4_1_conv (Conv2D)   (None, 28, 28, 128)  65536       ['conv3_block4_preact_relu[0][0]']                                conv3_block4_1_bn (BatchNormal  (None, 28, 28, 128)  512        ['conv3_block4_1_conv[0][0]']    ization)                                                                                         conv3_block4_1_relu (Activatio  (None, 28, 28, 128)  0          ['conv3_block4_1_bn[0][0]']      n)                                                                                               conv3_block4_2_pad (ZeroPaddin  (None, 30, 30, 128)  0          ['conv3_block4_1_relu[0][0]']    g2D)                                                                                             conv3_block4_2_conv (Conv2D)   (None, 14, 14, 128)  147456      ['conv3_block4_2_pad[0][0]']     conv3_block4_2_bn (BatchNormal  (None, 14, 14, 128)  512        ['conv3_block4_2_conv[0][0]']    ization)                                                                                         conv3_block4_2_relu (Activatio  (None, 14, 14, 128)  0          ['conv3_block4_2_bn[0][0]']      n)                                                                                               max_pooling2d_4 (MaxPooling2D)  (None, 14, 14, 512)  0          ['conv3_block3_out[0][0]']       conv3_block4_3_conv (Conv2D)   (None, 14, 14, 512)  66048       ['conv3_block4_2_relu[0][0]']    conv3_block4_out (Add)         (None, 14, 14, 512)  0           ['max_pooling2d_4[0][0]',        'conv3_block4_3_conv[0][0]']    conv4_block1_preact_bn (BatchN  (None, 14, 14, 512)  2048       ['conv3_block4_out[0][0]']       ormalization)                                                                                    conv4_block1_preact_relu (Acti  (None, 14, 14, 512)  0          
['conv4_block1_preact_bn[0][0]'] vation)                                                                                          conv4_block1_1_conv (Conv2D)   (None, 14, 14, 256)  131072      ['conv4_block1_preact_relu[0][0]']                                conv4_block1_1_bn (BatchNormal  (None, 14, 14, 256)  1024       ['conv4_block1_1_conv[0][0]']    ization)                                                                                         conv4_block1_1_relu (Activatio  (None, 14, 14, 256)  0          ['conv4_block1_1_bn[0][0]']      n)                                                                                               conv4_block1_2_pad (ZeroPaddin  (None, 16, 16, 256)  0          ['conv4_block1_1_relu[0][0]']    g2D)                                                                                             conv4_block1_2_conv (Conv2D)   (None, 14, 14, 256)  589824      ['conv4_block1_2_pad[0][0]']     conv4_block1_2_bn (BatchNormal  (None, 14, 14, 256)  1024       ['conv4_block1_2_conv[0][0]']    ization)                                                                                         conv4_block1_2_relu (Activatio  (None, 14, 14, 256)  0          ['conv4_block1_2_bn[0][0]']      n)                                                                                               conv4_block1_0_conv (Conv2D)   (None, 14, 14, 1024  525312      ['conv4_block1_preact_relu[0][0]')                                ]                                conv4_block1_3_conv (Conv2D)   (None, 14, 14, 1024  263168      ['conv4_block1_2_relu[0][0]']    )                                                                 conv4_block1_out (Add)         (None, 14, 14, 1024  0           ['conv4_block1_0_conv[0][0]',    )                                 'conv4_block1_3_conv[0][0]']    conv4_block2_preact_bn (BatchN  (None, 14, 14, 1024  4096       ['conv4_block1_out[0][0]']       ormalization)                  )                                                                 conv4_block2_preact_relu (Acti  (None, 14, 14, 1024  0          ['conv4_block2_preact_bn[0][0]'] vation)                        )                                                                 conv4_block2_1_conv (Conv2D)   (None, 14, 14, 256)  262144      ['conv4_block2_preact_relu[0][0]']                                conv4_block2_1_bn (BatchNormal  (None, 14, 14, 256)  1024       ['conv4_block2_1_conv[0][0]']    ization)                                                                                         conv4_block2_1_relu (Activatio  (None, 14, 14, 256)  0          ['conv4_block2_1_bn[0][0]']      n)                                                                                               conv4_block2_2_pad (ZeroPaddin  (None, 16, 16, 256)  0          ['conv4_block2_1_relu[0][0]']    g2D)                                                                                             conv4_block2_2_conv (Conv2D)   (None, 14, 14, 256)  589824      ['conv4_block2_2_pad[0][0]']     conv4_block2_2_bn (BatchNormal  (None, 14, 14, 256)  1024       ['conv4_block2_2_conv[0][0]']    ization)                                                                                         conv4_block2_2_relu (Activatio  (None, 14, 14, 256)  0          ['conv4_block2_2_bn[0][0]']      n)                                                                                               conv4_block2_3_conv (Conv2D)   (None, 14, 14, 1024  263168      ['conv4_block2_2_relu[0][0]']    )                                                        
         conv4_block2_out (Add)         (None, 14, 14, 1024  0           ['conv4_block1_out[0][0]',       )                                 'conv4_block2_3_conv[0][0]']    conv4_block3_preact_bn (BatchN  (None, 14, 14, 1024  4096       ['conv4_block2_out[0][0]']       ormalization)                  )                                                                 conv4_block3_preact_relu (Acti  (None, 14, 14, 1024  0          ['conv4_block3_preact_bn[0][0]'] vation)                        )                                                                 conv4_block3_1_conv (Conv2D)   (None, 14, 14, 256)  262144      ['conv4_block3_preact_relu[0][0]']                                conv4_block3_1_bn (BatchNormal  (None, 14, 14, 256)  1024       ['conv4_block3_1_conv[0][0]']    ization)                                                                                         conv4_block3_1_relu (Activatio  (None, 14, 14, 256)  0          ['conv4_block3_1_bn[0][0]']      n)                                                                                               conv4_block3_2_pad (ZeroPaddin  (None, 16, 16, 256)  0          ['conv4_block3_1_relu[0][0]']    g2D)                                                                                             conv4_block3_2_conv (Conv2D)   (None, 14, 14, 256)  589824      ['conv4_block3_2_pad[0][0]']     conv4_block3_2_bn (BatchNormal  (None, 14, 14, 256)  1024       ['conv4_block3_2_conv[0][0]']    ization)                                                                                         conv4_block3_2_relu (Activatio  (None, 14, 14, 256)  0          ['conv4_block3_2_bn[0][0]']      n)                                                                                               conv4_block3_3_conv (Conv2D)   (None, 14, 14, 1024  263168      ['conv4_block3_2_relu[0][0]']    )                                                                 conv4_block3_out (Add)         (None, 14, 14, 1024  0           ['conv4_block2_out[0][0]',       )                                 'conv4_block3_3_conv[0][0]']    conv4_block4_preact_bn (BatchN  (None, 14, 14, 1024  4096       ['conv4_block3_out[0][0]']       ormalization)                  )                                                                 conv4_block4_preact_relu (Acti  (None, 14, 14, 1024  0          ['conv4_block4_preact_bn[0][0]'] vation)                        )                                                                 conv4_block4_1_conv (Conv2D)   (None, 14, 14, 256)  262144      ['conv4_block4_preact_relu[0][0]']                                conv4_block4_1_bn (BatchNormal  (None, 14, 14, 256)  1024       ['conv4_block4_1_conv[0][0]']    ization)                                                                                         conv4_block4_1_relu (Activatio  (None, 14, 14, 256)  0          ['conv4_block4_1_bn[0][0]']      n)                                                                                               conv4_block4_2_pad (ZeroPaddin  (None, 16, 16, 256)  0          ['conv4_block4_1_relu[0][0]']    g2D)                                                                                             conv4_block4_2_conv (Conv2D)   (None, 14, 14, 256)  589824      ['conv4_block4_2_pad[0][0]']     conv4_block4_2_bn (BatchNormal  (None, 14, 14, 256)  1024       ['conv4_block4_2_conv[0][0]']    ization)                                                                                         conv4_block4_2_relu (Activatio  (None, 14, 14, 256)  0          
['conv4_block4_2_bn[0][0]']      n)                                                                                               conv4_block4_3_conv (Conv2D)   (None, 14, 14, 1024  263168      ['conv4_block4_2_relu[0][0]']    )                                                                 conv4_block4_out (Add)         (None, 14, 14, 1024  0           ['conv4_block3_out[0][0]',       )                                 'conv4_block4_3_conv[0][0]']    conv4_block5_preact_bn (BatchN  (None, 14, 14, 1024  4096       ['conv4_block4_out[0][0]']       ormalization)                  )                                                                 conv4_block5_preact_relu (Acti  (None, 14, 14, 1024  0          ['conv4_block5_preact_bn[0][0]'] vation)                        )                                                                 conv4_block5_1_conv (Conv2D)   (None, 14, 14, 256)  262144      ['conv4_block5_preact_relu[0][0]']                                conv4_block5_1_bn (BatchNormal  (None, 14, 14, 256)  1024       ['conv4_block5_1_conv[0][0]']    ization)                                                                                         conv4_block5_1_relu (Activatio  (None, 14, 14, 256)  0          ['conv4_block5_1_bn[0][0]']      n)                                                                                               conv4_block5_2_pad (ZeroPaddin  (None, 16, 16, 256)  0          ['conv4_block5_1_relu[0][0]']    g2D)                                                                                             conv4_block5_2_conv (Conv2D)   (None, 14, 14, 256)  589824      ['conv4_block5_2_pad[0][0]']     conv4_block5_2_bn (BatchNormal  (None, 14, 14, 256)  1024       ['conv4_block5_2_conv[0][0]']    ization)                                                                                         conv4_block5_2_relu (Activatio  (None, 14, 14, 256)  0          ['conv4_block5_2_bn[0][0]']      n)                                                                                               conv4_block5_3_conv (Conv2D)   (None, 14, 14, 1024  263168      ['conv4_block5_2_relu[0][0]']    )                                                                 conv4_block5_out (Add)         (None, 14, 14, 1024  0           ['conv4_block4_out[0][0]',       )                                 'conv4_block5_3_conv[0][0]']    conv4_block6_preact_bn (BatchN  (None, 14, 14, 1024  4096       ['conv4_block5_out[0][0]']       ormalization)                  )                                                                 conv4_block6_preact_relu (Acti  (None, 14, 14, 1024  0          ['conv4_block6_preact_bn[0][0]'] vation)                        )                                                                 conv4_block6_1_conv (Conv2D)   (None, 14, 14, 256)  262144      ['conv4_block6_preact_relu[0][0]']                                conv4_block6_1_bn (BatchNormal  (None, 14, 14, 256)  1024       ['conv4_block6_1_conv[0][0]']    ization)                                                                                         conv4_block6_1_relu (Activatio  (None, 14, 14, 256)  0          ['conv4_block6_1_bn[0][0]']      n)                                                                                               conv4_block6_2_pad (ZeroPaddin  (None, 16, 16, 256)  0          ['conv4_block6_1_relu[0][0]']    g2D)                                                                                             conv4_block6_2_conv (Conv2D)   (None, 7, 7, 256)    589824      
['conv4_block6_2_pad[0][0]']     conv4_block6_2_bn (BatchNormal  (None, 7, 7, 256)   1024        ['conv4_block6_2_conv[0][0]']    ization)                                                                                         conv4_block6_2_relu (Activatio  (None, 7, 7, 256)   0           ['conv4_block6_2_bn[0][0]']      n)                                                                                               max_pooling2d_5 (MaxPooling2D)  (None, 7, 7, 1024)  0           ['conv4_block5_out[0][0]']       conv4_block6_3_conv (Conv2D)   (None, 7, 7, 1024)   263168      ['conv4_block6_2_relu[0][0]']    conv4_block6_out (Add)         (None, 7, 7, 1024)   0           ['max_pooling2d_5[0][0]',        'conv4_block6_3_conv[0][0]']    conv5_block1_preact_bn (BatchN  (None, 7, 7, 1024)  4096        ['conv4_block6_out[0][0]']       ormalization)                                                                                    conv5_block1_preact_relu (Acti  (None, 7, 7, 1024)  0           ['conv5_block1_preact_bn[0][0]'] vation)                                                                                          conv5_block1_1_conv (Conv2D)   (None, 7, 7, 512)    524288      ['conv5_block1_preact_relu[0][0]']                                conv5_block1_1_bn (BatchNormal  (None, 7, 7, 512)   2048        ['conv5_block1_1_conv[0][0]']    ization)                                                                                         conv5_block1_1_relu (Activatio  (None, 7, 7, 512)   0           ['conv5_block1_1_bn[0][0]']      n)                                                                                               conv5_block1_2_pad (ZeroPaddin  (None, 9, 9, 512)   0           ['conv5_block1_1_relu[0][0]']    g2D)                                                                                             conv5_block1_2_conv (Conv2D)   (None, 7, 7, 512)    2359296     ['conv5_block1_2_pad[0][0]']     conv5_block1_2_bn (BatchNormal  (None, 7, 7, 512)   2048        ['conv5_block1_2_conv[0][0]']    ization)                                                                                         conv5_block1_2_relu (Activatio  (None, 7, 7, 512)   0           ['conv5_block1_2_bn[0][0]']      n)                                                                                               conv5_block1_0_conv (Conv2D)   (None, 7, 7, 2048)   2099200     ['conv5_block1_preact_relu[0][0]']                                conv5_block1_3_conv (Conv2D)   (None, 7, 7, 2048)   1050624     ['conv5_block1_2_relu[0][0]']    conv5_block1_out (Add)         (None, 7, 7, 2048)   0           ['conv5_block1_0_conv[0][0]',    'conv5_block1_3_conv[0][0]']    conv5_block2_preact_bn (BatchN  (None, 7, 7, 2048)  8192        ['conv5_block1_out[0][0]']       ormalization)                                                                                    conv5_block2_preact_relu (Acti  (None, 7, 7, 2048)  0           ['conv5_block2_preact_bn[0][0]'] vation)                                                                                          conv5_block2_1_conv (Conv2D)   (None, 7, 7, 512)    1048576     ['conv5_block2_preact_relu[0][0]']                                conv5_block2_1_bn (BatchNormal  (None, 7, 7, 512)   2048        ['conv5_block2_1_conv[0][0]']    ization)                                                                                         conv5_block2_1_relu (Activatio  (None, 7, 7, 512)   0           ['conv5_block2_1_bn[0][0]']      n)                                                           
                                    conv5_block2_2_pad (ZeroPaddin  (None, 9, 9, 512)   0           ['conv5_block2_1_relu[0][0]']    g2D)                                                                                             conv5_block2_2_conv (Conv2D)   (None, 7, 7, 512)    2359296     ['conv5_block2_2_pad[0][0]']     conv5_block2_2_bn (BatchNormal  (None, 7, 7, 512)   2048        ['conv5_block2_2_conv[0][0]']    ization)                                                                                         conv5_block2_2_relu (Activatio  (None, 7, 7, 512)   0           ['conv5_block2_2_bn[0][0]']      n)                                                                                               conv5_block2_3_conv (Conv2D)   (None, 7, 7, 2048)   1050624     ['conv5_block2_2_relu[0][0]']    conv5_block2_out (Add)         (None, 7, 7, 2048)   0           ['conv5_block1_out[0][0]',       'conv5_block2_3_conv[0][0]']    conv5_block3_preact_bn (BatchN  (None, 7, 7, 2048)  8192        ['conv5_block2_out[0][0]']       ormalization)                                                                                    conv5_block3_preact_relu (Acti  (None, 7, 7, 2048)  0           ['conv5_block3_preact_bn[0][0]'] vation)                                                                                          conv5_block3_1_conv (Conv2D)   (None, 7, 7, 512)    1048576     ['conv5_block3_preact_relu[0][0]']                                conv5_block3_1_bn (BatchNormal  (None, 7, 7, 512)   2048        ['conv5_block3_1_conv[0][0]']    ization)                                                                                         conv5_block3_1_relu (Activatio  (None, 7, 7, 512)   0           ['conv5_block3_1_bn[0][0]']      n)                                                                                               conv5_block3_2_pad (ZeroPaddin  (None, 9, 9, 512)   0           ['conv5_block3_1_relu[0][0]']    g2D)                                                                                             conv5_block3_2_conv (Conv2D)   (None, 7, 7, 512)    2359296     ['conv5_block3_2_pad[0][0]']     conv5_block3_2_bn (BatchNormal  (None, 7, 7, 512)   2048        ['conv5_block3_2_conv[0][0]']    ization)                                                                                         conv5_block3_2_relu (Activatio  (None, 7, 7, 512)   0           ['conv5_block3_2_bn[0][0]']      n)                                                                                               conv5_block3_3_conv (Conv2D)   (None, 7, 7, 2048)   1050624     ['conv5_block3_2_relu[0][0]']    conv5_block3_out (Add)         (None, 7, 7, 2048)   0           ['conv5_block2_out[0][0]',       'conv5_block3_3_conv[0][0]']    post_bn (BatchNormalization)   (None, 7, 7, 2048)   8192        ['conv5_block3_out[0][0]']       post_relu (Activation)         (None, 7, 7, 2048)   0           ['post_bn[0][0]']                avg_pool (GlobalAveragePooling  (None, 2048)        0           ['post_relu[0][0]']              2D)                                                                                              predictions (Dense)            (None, 1000)         2049000     ['avg_pool[0][0]']               ==================================================================================================
Total params: 25,613,800
Trainable params: 25,568,360
Non-trainable params: 45,440
__________________________________________________________________________________________________
model.compile(optimizer="adam",
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
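One thing worth pointing out: the model above keeps the default classes=1000, while the bird dataset only has 4 classes. Training still works because labels 0–3 are valid indices for a 1000-way softmax, but a head sized to the dataset is the more natural choice. A minimal sketch of that variant (not what was actually trained here; num_classes is derived from the class_names read earlier):

# Sketch: rebuild the hand-made ResNet50V2 with an output layer matching
# the 4 bird classes instead of the default 1000 ImageNet classes.
num_classes = len(class_names)  # 4 for this dataset
model = ResNet50V2(input_shape=(224, 224, 3), classes=num_classes)
model.compile(optimizer="adam",
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])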

V. Training the Model

epochs = 10

history = model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=epochs
)
Epoch 1/10
57/57 [==============================] - 36s 613ms/step - loss: 1.4427 - accuracy: 0.5310 - val_loss: 994.0645 - val_accuracy: 0.3186
Epoch 2/10
57/57 [==============================] - 34s 599ms/step - loss: 0.8882 - accuracy: 0.6482 - val_loss: 16.8545 - val_accuracy: 0.3805
Epoch 3/10
57/57 [==============================] - 34s 599ms/step - loss: 0.8002 - accuracy: 0.6925 - val_loss: 4.0983 - val_accuracy: 0.4602
Epoch 4/10
57/57 [==============================] - 34s 599ms/step - loss: 0.6916 - accuracy: 0.7412 - val_loss: 4.0794 - val_accuracy: 0.3540
Epoch 5/10
57/57 [==============================] - 34s 600ms/step - loss: 0.6951 - accuracy: 0.7456 - val_loss: 83.9201 - val_accuracy: 0.3186
Epoch 6/10
57/57 [==============================] - 34s 604ms/step - loss: 0.6024 - accuracy: 0.7655 - val_loss: 3.4875 - val_accuracy: 0.4425
Epoch 7/10
57/57 [==============================] - 34s 600ms/step - loss: 0.5214 - accuracy: 0.7965 - val_loss: 1.2221 - val_accuracy: 0.6991
Epoch 8/10
57/57 [==============================] - 35s 623ms/step - loss: 0.4850 - accuracy: 0.8142 - val_loss: 1.4684 - val_accuracy: 0.4867
Epoch 9/10
57/57 [==============================] - 35s 613ms/step - loss: 0.4335 - accuracy: 0.8252 - val_loss: 1.5076 - val_accuracy: 0.6814
Epoch 10/10
57/57 [==============================] - 35s 615ms/step - loss: 0.3424 - accuracy: 0.8850 - val_loss: 1.7519 - val_accuracy: 0.6903

VI. Visualizing the Results

acc = history.history['accuracy']
val_acc = history.history['val_accuracy']

loss = history.history['loss']
val_loss = history.history['val_loss']

epochs_range = range(epochs)

plt.figure(figsize=(12, 4))

plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()

[Figure: training/validation accuracy and loss curves over the 10 epochs]

Summary

This week's project hand-built the ResNet50V2 model. Compared with ResNet50, its residual block moves BN and ReLU in front of the convolutions (pre-activation), which improves accuracy to some extent. The results here are still not ideal, however, and the model needs further tuning.
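One likely cause of the unstable validation loss (e.g. val_loss of 994 in the first epoch) is that raw pixel values in the 0–255 range go into the network without any normalization. A possible follow-up is sketched below; it is not part of this week's experiment, and on older TensorFlow versions the layer lives at layers.experimental.preprocessing.Rescaling instead of layers.Rescaling:

# Sketch: scale pixel values to [0, 1] before training; an assumption,
# not something the original run did.
normalization_layer = layers.Rescaling(1. / 255)

train_ds_norm = train_ds.map(lambda x, y: (normalization_layer(x), y))
val_ds_norm = val_ds.map(lambda x, y: (normalization_layer(x), y))

# history = model.fit(train_ds_norm, validation_data=val_ds_norm, epochs=epochs)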
