
GStreamer: a source-code analysis of GstVideoDecoder

The GstVideoDecoder base class in GStreamer provides a framework for implementing video decoders. It defines the rules and conventions that govern how the base class and its derived subclasses (the concrete video decoders) interact and cooperate.

/**
 * SECTION:gstvideodecoder
 * @title: GstVideoDecoder
 * @short_description: Base class for video decoders
 *
 * This base class is for video decoders turning encoded data into raw video
 * frames.
 *
 * The GstVideoDecoder base class and derived subclasses should cooperate as
 * follows:
 *
 * ## Configuration
 *
 *   * Initially, GstVideoDecoder calls @start when the decoder element
 *     is activated, which allows the subclass to perform any global setup.
 *
 *   * GstVideoDecoder calls @set_format to inform the subclass of caps
 *     describing input video data that it is about to receive, including
 *     possibly configuration data.
 *     While unlikely, it might be called more than once, if changing input
 *     parameters require reconfiguration.
 *
 *   * Incoming data buffers are processed as needed, described in Data
 *     Processing below.
 *
 *   * GstVideoDecoder calls @stop at end of all processing.
 *
 * ## Data processing
 *
 *   * The base class gathers input data, and optionally allows subclass
 *     to parse this into subsequently manageable chunks, typically
 *     corresponding to and referred to as 'frames'.
 *
 *   * Each input frame is provided in turn to the subclass' @handle_frame
 *     callback.
 *   * When the subclass enables the subframe mode with `gst_video_decoder_set_subframe_mode`,
 *     the base class will provide to the subclass the same input frame with
 *     different input buffers to the subclass @handle_frame
 *     callback. During this call, the subclass needs to take
 *     ownership of the input_buffer as @GstVideoCodecFrame.input_buffer
 *     will have been changed before the next subframe buffer is received.
 *     The subclass will call `gst_video_decoder_have_last_subframe`
 *     when a new input frame can be created by the base class.
 *     Every subframe will share the same @GstVideoCodecFrame.output_buffer
 *     to write the decoding result. The subclass is responsible to protect
 *     its access.
 *
 *   * If codec processing results in decoded data, the subclass should call
 *     @gst_video_decoder_finish_frame to have decoded data pushed
 *     downstream. In subframe mode
 *     the subclass should call @gst_video_decoder_finish_subframe until the
 *     last subframe where it should call @gst_video_decoder_finish_frame.
 *     The subclass can detect the last subframe using GST_VIDEO_BUFFER_FLAG_MARKER
 *     on buffers or using its own logic to collect the subframes.
 *     In case of decoding failure, the subclass must call
 *     @gst_video_decoder_drop_frame or @gst_video_decoder_drop_subframe,
 *     to allow the base class to do timestamp and offset tracking, and possibly
 *     to requeue the frame for a later attempt in the case of reverse playback.
 *
 * ## Shutdown phase
 *
 *   * The GstVideoDecoder class calls @stop to inform the subclass that data
 *     parsing will be stopped.
 *
 * ## Additional Notes
 *
 *   * Seeking/Flushing
 *
 *     * When the pipeline is seeked or otherwise flushed, the subclass is
 *       informed via a call to its @reset callback, with the hard parameter
 *       set to true. This indicates the subclass should drop any internal data
 *       queues and timestamps and prepare for a fresh set of buffers to arrive
 *       for parsing and decoding.
 *
 *   * End Of Stream
 *
 *     * At end-of-stream, the subclass @parse function may be called some final
 *       times with the at_eos parameter set to true, indicating that the element
 *       should not expect any more data to be arriving, and it should parse any
 *       remaining frames and call gst_video_decoder_have_frame() if possible.
 *
 * The subclass is responsible for providing pad template caps for
 * source and sink pads. The pads need to be named "sink" and "src". It also
 * needs to provide information about the output caps, when they are known.
 * This may be when the base class calls the subclass' @set_format function,
 * though it might be during decoding, before calling
 * @gst_video_decoder_finish_frame. This is done via
 * @gst_video_decoder_set_output_state
 *
 * The subclass is also responsible for providing (presentation) timestamps
 * (likely based on corresponding input ones).  If that is not applicable
 * or possible, the base class provides limited framerate based interpolation.
 *
 * Similarly, the base class provides some limited (legacy) seeking support
 * if specifically requested by the subclass, as full-fledged support
 * should rather be left to upstream demuxer, parser or alike.  This simple
 * approach caters for seeking and duration reporting using estimated input
 * bitrates. To enable it, a subclass should call
 * @gst_video_decoder_set_estimate_rate to enable handling of incoming
 * byte-streams.
 *
 * The base class provides some support for reverse playback, in particular
 * in case incoming data is not packetized or upstream does not provide
 * fragments on keyframe boundaries.  However, the subclass should then be
 * prepared for the parsing and frame processing stage to occur separately
 * (in normal forward processing, the latter immediately follows the former).
 * The subclass also needs to ensure the parsing stage properly marks
 * keyframes, unless it knows the upstream elements will do so properly for
 * incoming data.
 *
 * The bare minimum that a functional subclass needs to implement is:
 *
 *   * Provide pad templates
 *   * Inform the base class of output caps via
 *      @gst_video_decoder_set_output_state
 *
 *   * Parse input data, if it is not considered packetized from upstream
 *      Data will be provided to @parse which should invoke
 *      @gst_video_decoder_add_to_frame and @gst_video_decoder_have_frame to
 *      separate the data belonging to each video frame.
 *
 *   * Accept data in @handle_frame and provide decoded results to
 *      @gst_video_decoder_finish_frame, or call @gst_video_decoder_drop_frame.
 */

The virtual functions defined by GstVideoDecoderClass:

/**
 * GstVideoDecoderClass:
 * @open:           Optional.
 *                  Called when the element changes to GST_STATE_READY.
 *                  Allows opening external resources.
 * @close:          Optional.
 *                  Called when the element changes to GST_STATE_NULL.
 *                  Allows closing external resources.
 * @start:          Optional.
 *                  Called when the element starts processing.
 *                  Allows opening external resources.
 * @stop:           Optional.
 *                  Called when the element stops processing.
 *                  Allows closing external resources.
 * @set_format:     Notifies subclass of incoming data format (caps).
 * @parse:          Required for non-packetized input.
 *                  Allows chopping incoming data into manageable units (frames)
 *                  for subsequent decoding.
 * @reset:          Optional.
 *                  Allows subclass (decoder) to perform post-seek semantics reset.
 *                  Deprecated.
 * @handle_frame:   Provides input data frame to subclass. In subframe mode, the subclass needs
 *                  to take ownership of @GstVideoCodecFrame.input_buffer as it will be modified
 *                  by the base class on the next subframe buffer receiving.
 * @finish:         Optional.
 *                  Called to request subclass to dispatch any pending remaining
 *                  data at EOS. Sub-classes can refuse to decode new data after.
 * @drain:          Optional.
 *                  Called to request subclass to decode any data it can at this
 *                  point, but that more data may arrive after. (e.g. at segment end).
 *                  Sub-classes should be prepared to handle new data afterward,
 *                  or seamless segment processing will break. Since: 1.6
 * @sink_event:     Optional.
 *                  Event handler on the sink pad. This function should return
 *                  TRUE if the event was handled and should be discarded
 *                  (i.e. not unref'ed).
 *                  Subclasses should chain up to the parent implementation to
 *                  invoke the default handler.
 * @src_event:      Optional.
 *                  Event handler on the source pad. This function should return
 *                  TRUE if the event was handled and should be discarded
 *                  (i.e. not unref'ed).
 *                  Subclasses should chain up to the parent implementation to
 *                  invoke the default handler.
 * @negotiate:      Optional.
 *                  Negotiate with downstream and configure buffer pools, etc.
 *                  Subclasses should chain up to the parent implementation to
 *                  invoke the default handler.
 * @decide_allocation: Optional.
 *                     Setup the allocation parameters for allocating output
 *                     buffers. The passed in query contains the result of the
 *                     downstream allocation query.
 *                     Subclasses should chain up to the parent implementation to
 *                     invoke the default handler.
 * @propose_allocation: Optional.
 *                      Propose buffer allocation parameters for upstream elements.
 *                      Subclasses should chain up to the parent implementation to
 *                      invoke the default handler.
 * @flush:              Optional.
 *                      Flush all remaining data from the decoder without
 *                      pushing it downstream. Since: 1.2
 * @sink_query:     Optional.
 *                  Query handler on the sink pad. This function should
 *                  return TRUE if the query could be performed. Subclasses
 *                  should chain up to the parent implementation to invoke the
 *                  default handler. Since: 1.4
 * @src_query:      Optional.
 *                  Query handler on the source pad. This function should
 *                  return TRUE if the query could be performed. Subclasses
 *                  should chain up to the parent implementation to invoke the
 *                  default handler. Since: 1.4
 * @getcaps:        Optional.
 *                  Allows for a custom sink getcaps implementation.
 *                  If not implemented, default returns
 *                  gst_video_decoder_proxy_getcaps
 *                  applied to sink template caps.
 * @transform_meta: Optional. Transform the metadata on the input buffer to the
 *                  output buffer. By default this method copies all meta without
 *                  tags and meta with only the "video" tag. Subclasses can
 *                  implement this method and return %TRUE if the metadata is to be
 *                  copied. Since: 1.6
 *
 * Subclasses can override any of the available virtual methods or not, as
 * needed. At minimum @handle_frame needs to be overridden, and @set_format
 * likely as well.  If non-packetized input is supported or expected,
 * @parse needs to be overridden as well.
 */

As the comment says, a subclass must at minimum override handle_frame and set_format; if the input is non-packetized, it must also override parse.
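The base/subclass contract can be sketched without GStreamer at all: the base class only ever reaches the subclass through a small table of function pointers. The following is a minimal, self-contained illustration of that idea; all names (`VideoDecoderClass`, `toy_*`, `base_push_buffer`) are hypothetical stand-ins, not GStreamer API.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, simplified stand-in for the GstVideoDecoderClass vtable:
 * the base class only calls the subclass through these hooks. */
typedef struct {
    int  (*set_format)(void *dec, const char *caps);                  /* caps notification */
    int  (*handle_frame)(void *dec, const unsigned char *buf, size_t len); /* required */
    long (*parse)(void *dec, const unsigned char *buf, size_t len);   /* only for non-packetized input */
} VideoDecoderClass;

/* A toy subclass that accepts any caps and "decodes" by counting bytes. */
static int toy_set_format(void *dec, const char *caps) {
    (void)dec; (void)caps;
    return 1;                        /* accept the format */
}

static int toy_handle_frame(void *dec, const unsigned char *buf, size_t len) {
    (void)buf;
    *(size_t *)dec += len;           /* accumulate "decoded" byte count */
    return 0;                        /* 0 == flow OK */
}

/* What the base class does per buffer once caps are set (packetized case). */
static int base_push_buffer(const VideoDecoderClass *klass, void *dec,
                            const unsigned char *buf, size_t len) {
    return klass->handle_frame(dec, buf, len);
}
```

The real base class adds locking, timestamp tracking, and adapters around exactly this dispatch.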

This article uses the jpegdec plugin as an example. The key functions it overrides:

  vdec_class->start = gst_jpeg_dec_start;
  vdec_class->stop = gst_jpeg_dec_stop;
  vdec_class->flush = gst_jpeg_dec_flush;
  vdec_class->parse = gst_jpeg_dec_parse;
  vdec_class->set_format = gst_jpeg_dec_set_format;
  vdec_class->handle_frame = gst_jpeg_dec_handle_frame;
  vdec_class->decide_allocation = gst_jpeg_dec_decide_allocation;
  vdec_class->sink_event = gst_jpeg_dec_sink_event;

set_format: Notifies subclass of incoming data format (caps).

set_format is called from gst_video_decoder_sink_event_default -> gst_video_decoder_setcaps:

    case GST_EVENT_CAPS:
    {
      GstCaps *caps;

      gst_event_parse_caps (event, &caps);
      ret = gst_video_decoder_setcaps (decoder, caps);
      gst_event_unref (event);
      event = NULL;
      break;
    }
static gboolean
gst_video_decoder_setcaps (GstVideoDecoder * decoder, GstCaps * caps)
{
  GstVideoDecoderClass *decoder_class;
  GstVideoCodecState *state;
  gboolean ret = TRUE;

  decoder_class = GST_VIDEO_DECODER_GET_CLASS (decoder);

  GST_DEBUG_OBJECT (decoder, "setcaps %" GST_PTR_FORMAT, caps);

  GST_VIDEO_DECODER_STREAM_LOCK (decoder);

  if (decoder->priv->input_state) {
    GST_DEBUG_OBJECT (decoder,
        "Checking if caps changed old %" GST_PTR_FORMAT " new %" GST_PTR_FORMAT,
        decoder->priv->input_state->caps, caps);
    if (gst_caps_is_equal (decoder->priv->input_state->caps, caps))
      goto caps_not_changed;
  }

  state = _new_input_state (caps);

  if (G_UNLIKELY (state == NULL))
    goto parse_fail;

  if (decoder_class->set_format)
    ret = decoder_class->set_format (decoder, state);

  if (!ret)
    goto refused_format;

  if (decoder->priv->input_state)
    gst_video_codec_state_unref (decoder->priv->input_state);
  decoder->priv->input_state = state;

  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);

  return ret;
}
/**
 * GstVideoCodecState:
 * @info: The #GstVideoInfo describing the stream
 * @caps: The #GstCaps used in the caps negotiation of the pad.
 * @codec_data: a #GstBuffer corresponding to the
 *     'codec_data' field of a stream, or NULL.
 * @allocation_caps: The #GstCaps for allocation query and pool
 *     negotiation. Since: 1.10
 * @mastering_display_info: Mastering display color volume information
 *     (HDR metadata) for the stream. Since: 1.20
 * @content_light_level: Content light level information for the stream.
 *     Since: 1.20
 *
 * Structure representing the state of an incoming or outgoing video
 * stream for encoders and decoders.
 *
 * Decoders and encoders will receive such a state through their
 * respective @set_format vmethods.
 *
 * Decoders and encoders can set the downstream state, by using the
 * gst_video_decoder_set_output_state() or
 * gst_video_encoder_set_output_state() methods.
 */

/**
 * GstVideoCodecState.mastering_display_info:
 *
 * Mastering display color volume information (HDR metadata) for the stream.
 *
 * Since: 1.20
 */

/**
 * GstVideoCodecState.content_light_level:
 *
 * Content light level information for the stream.
 *
 * Since: 1.20
 */
struct _GstVideoCodecState
{
  /*< private >*/
  gint ref_count;

  /*< public >*/
  GstVideoInfo info;
  GstCaps *caps;
  GstBuffer *codec_data;
  GstCaps *allocation_caps;
  GstVideoMasteringDisplayInfo *mastering_display_info;
  GstVideoContentLightLevel *content_light_level;

  /*< private >*/
  gpointer padding[GST_PADDING_LARGE - 3];
};

The upstream element sets the caps via a GST_EVENT_CAPS event; the resulting input format is stored in a GstVideoCodecState.
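Note how gst_video_decoder_setcaps short-circuits: the base class caches the last input caps and only calls the subclass's set_format when they actually change. A minimal sketch of that idea, using strings as a stand-in for GstCaps (`ToyState` and `toy_setcaps` are illustrative names, not GStreamer API):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical sketch of the setcaps short-circuit: keep the last input
 * caps and only invoke the subclass's set_format on a real change. */
typedef struct {
    char input_caps[64];        /* stand-in for priv->input_state->caps */
    int  set_format_calls;      /* counts subclass set_format invocations */
} ToyState;

static void toy_setcaps(ToyState *s, const char *caps) {
    if (s->input_caps[0] != '\0' && strcmp(s->input_caps, caps) == 0)
        return;                               /* caps_not_changed path */
    strncpy(s->input_caps, caps, sizeof(s->input_caps) - 1);
    s->input_caps[sizeof(s->input_caps) - 1] = '\0';
    s->set_format_calls++;                    /* subclass set_format invoked */
}
```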

Next, look at gst_video_decoder_change_state:

  switch (transition) {
    case GST_STATE_CHANGE_NULL_TO_READY:
      /* open device/library if needed */
      if (decoder_class->open && !decoder_class->open (decoder))
        goto open_failed;
      break;
    case GST_STATE_CHANGE_READY_TO_PAUSED:
      GST_VIDEO_DECODER_STREAM_LOCK (decoder);
      gst_video_decoder_reset (decoder, TRUE, TRUE);
      GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);

      /* Initialize device/library if needed */
      if (decoder_class->start && !decoder_class->start (decoder))
        goto start_failed;
      break;
    default:
      break;
  }

This calls the subclass's decoder_class->open and decoder_class->start in turn. jpegdec does not implement open:

static gboolean
gst_jpeg_dec_start (GstVideoDecoder * bdec)
{
  GstJpegDec *dec = (GstJpegDec *) bdec;

#ifdef JCS_EXTENSIONS
  dec->format_convert = FALSE;
#endif
  dec->saw_header = FALSE;
  dec->parse_entropy_len = 0;
  dec->parse_resync = FALSE;

  gst_video_decoder_set_packetized (bdec, FALSE);

  return TRUE;
}

When the upstream element pushes data, the sink pad's chain function is called:

static GstFlowReturn
gst_video_decoder_chain (GstPad * pad, GstObject * parent, GstBuffer * buf)
{
  GstVideoDecoder *decoder;
  GstFlowReturn ret = GST_FLOW_OK;

  decoder = GST_VIDEO_DECODER (parent);

  if (G_UNLIKELY (!decoder->priv->input_state && decoder->priv->needs_format))
    goto not_negotiated;

  GST_LOG_OBJECT (decoder,
      "chain PTS %" GST_TIME_FORMAT ", DTS %" GST_TIME_FORMAT " duration %"
      GST_TIME_FORMAT " size %" G_GSIZE_FORMAT " flags %x",
      GST_TIME_ARGS (GST_BUFFER_PTS (buf)),
      GST_TIME_ARGS (GST_BUFFER_DTS (buf)),
      GST_TIME_ARGS (GST_BUFFER_DURATION (buf)),
      gst_buffer_get_size (buf), GST_BUFFER_FLAGS (buf));

  GST_VIDEO_DECODER_STREAM_LOCK (decoder);

  /* NOTE:
   * requiring the pad to be negotiated makes it impossible to use
   * oggdemux or filesrc ! decoder */

  if (decoder->input_segment.format == GST_FORMAT_UNDEFINED) {
    GstEvent *event;
    GstSegment *segment = &decoder->input_segment;

    GST_WARNING_OBJECT (decoder,
        "Received buffer without a new-segment. "
        "Assuming timestamps start from 0.");

    gst_segment_init (segment, GST_FORMAT_TIME);

    event = gst_event_new_segment (segment);

    decoder->priv->current_frame_events =
        g_list_prepend (decoder->priv->current_frame_events, event);
  }

  decoder->priv->had_input_data = TRUE;

  if (decoder->input_segment.rate > 0.0)
    ret = gst_video_decoder_chain_forward (decoder, buf, FALSE);
  else
    ret = gst_video_decoder_chain_reverse (decoder, buf);

  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);

  return ret;

  /* ERRORS */
not_negotiated:
  {
    GST_ELEMENT_ERROR (decoder, CORE, NEGOTIATION, (NULL),
        ("decoder not initialized"));
    gst_buffer_unref (buf);
    return GST_FLOW_NOT_NEGOTIATED;
  }
}

Depending on decoder->input_segment.rate, the chain function dispatches as follows:

  if (decoder->input_segment.rate > 0.0)
    ret = gst_video_decoder_chain_forward (decoder, buf, FALSE);
  else
    ret = gst_video_decoder_chain_reverse (decoder, buf);

gst_video_decoder_chain_forward is the path worth studying:

static GstFlowReturn
gst_video_decoder_chain_forward (GstVideoDecoder * decoder,
    GstBuffer * buf, gboolean at_eos)
{
  GstVideoDecoderPrivate *priv;
  GstVideoDecoderClass *klass;
  GstFlowReturn ret = GST_FLOW_OK;

  klass = GST_VIDEO_DECODER_GET_CLASS (decoder);
  priv = decoder->priv;

  g_return_val_if_fail (priv->packetized || klass->parse, GST_FLOW_ERROR);

  /* Draining on DISCONT is handled in chain_reverse() for reverse playback,
   * and this function would only be called to get everything collected GOP
   * by GOP in the parse_gather list */
  if (decoder->input_segment.rate > 0.0 && GST_BUFFER_IS_DISCONT (buf)
      && (decoder->input_segment.flags & GST_SEEK_FLAG_TRICKMODE_KEY_UNITS))
    ret = gst_video_decoder_drain_out (decoder, FALSE);

  if (priv->current_frame == NULL)
    priv->current_frame = gst_video_decoder_new_frame (decoder);

  if (!priv->packetized)
    gst_video_decoder_add_buffer_info (decoder, buf);

  priv->input_offset += gst_buffer_get_size (buf);

  if (priv->packetized) {
    GstVideoCodecFrame *frame;
    gboolean was_keyframe = FALSE;

    frame = priv->current_frame;

    frame->abidata.ABI.num_subframes++;
    if (gst_video_decoder_get_subframe_mode (decoder)) {
      /* End the frame if the marker flag is set */
      if (!GST_BUFFER_FLAG_IS_SET (buf, GST_VIDEO_BUFFER_FLAG_MARKER)
          && (decoder->input_segment.rate > 0.0))
        priv->current_frame = gst_video_codec_frame_ref (frame);
      else
        priv->current_frame = NULL;
    } else {
      priv->current_frame = frame;
    }

    if (!GST_BUFFER_FLAG_IS_SET (buf, GST_BUFFER_FLAG_DELTA_UNIT)) {
      was_keyframe = TRUE;
      GST_DEBUG_OBJECT (decoder, "Marking current_frame as sync point");
      GST_VIDEO_CODEC_FRAME_SET_SYNC_POINT (frame);
    }

    if (frame->input_buffer) {
      gst_video_decoder_copy_metas (decoder, frame, frame->input_buffer, buf);
      gst_buffer_unref (frame->input_buffer);
    }
    frame->input_buffer = buf;

    if (decoder->input_segment.rate < 0.0) {
      priv->parse_gather = g_list_prepend (priv->parse_gather, frame);
      priv->current_frame = NULL;
    } else {
      ret = gst_video_decoder_decode_frame (decoder, frame);
      if (!gst_video_decoder_get_subframe_mode (decoder))
        priv->current_frame = NULL;
    }

    /* If in trick mode and it was a keyframe, drain decoder to avoid extra
     * latency. Only do this for forwards playback as reverse playback handles
     * draining on keyframes in flush_parse(), and would otherwise call back
     * from drain_out() to here causing an infinite loop.
     * Also this function is only called for reverse playback to gather frames
     * GOP by GOP, and does not do any actual decoding. That would be done by
     * flush_decode() */
    if (ret == GST_FLOW_OK && was_keyframe && decoder->input_segment.rate > 0.0
        && (decoder->input_segment.flags & GST_SEEK_FLAG_TRICKMODE_KEY_UNITS))
      ret = gst_video_decoder_drain_out (decoder, FALSE);
  } else {
    gst_adapter_push (priv->input_adapter, buf);
    ret = gst_video_decoder_parse_available (decoder, at_eos, TRUE);
  }

  if (ret == GST_VIDEO_DECODER_FLOW_NEED_DATA)
    return GST_FLOW_OK;

  return ret;
}

Key flow:

(1) If priv->packetized == TRUE, the buffer goes straight into gst_video_decoder_decode_frame, which then calls the subclass's decoder_class->handle_frame.

(2) If priv->packetized == FALSE, the buffer is pushed into input_adapter via gst_adapter_push, and gst_video_decoder_parse_available is called, which invokes the subclass's decoder_class->parse to parse the data.
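This two-way dispatch can be condensed into a small, GStreamer-free sketch: packetized buffers go to handle_frame() directly, while non-packetized bytes accumulate in an adapter for a later parse() pass. All names here (`ToyDecoder`, `toy_*`) are illustrative stand-ins, not GStreamer API.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch of chain_forward's dispatch. */
typedef struct {
    int            packetized;
    unsigned char  adapter[64];     /* stand-in for priv->input_adapter */
    size_t         adapter_len;
    size_t         frames_decoded;
} ToyDecoder;

static void toy_handle_frame(ToyDecoder *d, const unsigned char *buf, size_t len) {
    (void)buf; (void)len;
    d->frames_decoded++;            /* pretend we decoded one full frame */
}

static void toy_chain_forward(ToyDecoder *d, const unsigned char *buf, size_t len) {
    if (d->packetized) {
        /* each buffer is already exactly one frame */
        toy_handle_frame(d, buf, len);
    } else {
        /* gather bytes; a real parse() would scan for frame boundaries
         * and only then hand a complete frame to handle_frame() */
        memcpy(d->adapter + d->adapter_len, buf, len);
        d->adapter_len += len;
    }
}
```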

jpegdec's gst_jpeg_dec_handle_frame performs the actual decoding:

static GstFlowReturn
gst_jpeg_dec_handle_frame (GstVideoDecoder * bdec, GstVideoCodecFrame * frame)
{
  GstFlowReturn ret = GST_FLOW_OK;
  GstJpegDec *dec = (GstJpegDec *) bdec;
  GstVideoFrame vframe;
  gint num_fields;              /* number of fields (1 or 2) */
  gint output_height;           /* height of output image (one or two fields) */
  gint height;                  /* height of current frame (whole image or a field) */
  gint width;
  guint code;
  gboolean need_unmap = TRUE;
  GstVideoCodecState *state = NULL;
  gboolean release_frame = TRUE;
  gboolean has_eoi;
  guint8 *data;
  gsize nbytes;

  if (!gst_buffer_map (frame->input_buffer, &dec->current_frame_map,
          GST_MAP_READ))
    goto map_failed;

  data = dec->current_frame_map.data;
  nbytes = dec->current_frame_map.size;
  if (nbytes < 2)
    goto need_more_data;
  has_eoi = ((data[nbytes - 2] == 0xff) && (data[nbytes - 1] == 0xd9));

  /* some cameras fail to send an end-of-image marker (EOI),
   * add it if that is the case. */
  if (!has_eoi) {
    GstMapInfo map;
    GstBuffer *eoibuf = gst_buffer_new_and_alloc (2);

    /* unmap, will add EOI and remap at the end */
    gst_buffer_unmap (frame->input_buffer, &dec->current_frame_map);

    gst_buffer_map (eoibuf, &map, GST_MAP_WRITE);
    map.data[0] = 0xff;
    map.data[1] = 0xd9;
    gst_buffer_unmap (eoibuf, &map);

    /* append to input buffer, and remap */
    frame->input_buffer = gst_buffer_append (frame->input_buffer, eoibuf);

    gst_buffer_map (frame->input_buffer, &dec->current_frame_map, GST_MAP_READ);
    GST_DEBUG ("fixup EOI marker added");
  }

  dec->current_frame = frame;
  dec->cinfo.src->next_input_byte = dec->current_frame_map.data;
  dec->cinfo.src->bytes_in_buffer = dec->current_frame_map.size;

  if (setjmp (dec->jerr.setjmp_buffer)) {
    code = dec->jerr.pub.msg_code;

    if (code == JERR_INPUT_EOF) {
      GST_DEBUG ("jpeg input EOF error, we probably need more data");
      goto need_more_data;
    }
    goto decode_error;
  }

  /* read header and check values */
  ret = gst_jpeg_dec_prepare_decode (dec);
  if (G_UNLIKELY (ret == GST_FLOW_ERROR))
    goto done;

  width = dec->cinfo.output_width;
  height = dec->cinfo.output_height;

  /* is it interlaced MJPEG? (we really don't want to scan the jpeg data
   * to see if there are two SOF markers in the packet to detect this) */
  if (gst_video_decoder_get_packetized (bdec) &&
      dec->input_state &&
      dec->input_state->info.height > height &&
      dec->input_state->info.height <= (height * 2)
      && dec->input_state->info.width == width) {
    GST_LOG_OBJECT (dec,
        "looks like an interlaced image: "
        "input width/height of %dx%d with JPEG frame width/height of %dx%d",
        dec->input_state->info.width, dec->input_state->info.height, width,
        height);
    output_height = dec->input_state->info.height;
    height = dec->input_state->info.height / 2;
    num_fields = 2;
    GST_LOG_OBJECT (dec, "field height=%d", height);
  } else {
    output_height = height;
    num_fields = 1;
  }

  gst_jpeg_dec_negotiate (dec, width, output_height,
      dec->cinfo.jpeg_color_space, num_fields == 2);

  state = gst_video_decoder_get_output_state (bdec);
  ret = gst_video_decoder_allocate_output_frame (bdec, frame);
  if (G_UNLIKELY (ret != GST_FLOW_OK))
    goto alloc_failed;

  if (!gst_video_frame_map (&vframe, &state->info, frame->output_buffer,
          GST_MAP_READWRITE))
    goto alloc_failed;

  if (setjmp (dec->jerr.setjmp_buffer)) {
    code = dec->jerr.pub.msg_code;
    gst_video_frame_unmap (&vframe);
    goto decode_error;
  }

  GST_LOG_OBJECT (dec, "width %d, height %d, fields %d", width, output_height,
      num_fields);

  ret = gst_jpeg_dec_decode (dec, &vframe, width, height, 1, num_fields);
  if (G_UNLIKELY (ret != GST_FLOW_OK)) {
    gst_video_frame_unmap (&vframe);
    goto decode_failed;
  }

  if (setjmp (dec->jerr.setjmp_buffer)) {
    code = dec->jerr.pub.msg_code;
    gst_video_frame_unmap (&vframe);
    goto decode_error;
  }

  /* decode second field if there is one */
  if (num_fields == 2) {
    GstVideoFormat field2_format;

    /* Checked above before setting num_fields to 2 */
    g_assert (dec->input_state != NULL);

    /* skip any chunk or padding bytes before the next SOI marker; both fields
     * are in one single buffer here, so direct access should be fine here */
    while (dec->jsrc.pub.bytes_in_buffer > 2 &&
        GST_READ_UINT16_BE (dec->jsrc.pub.next_input_byte) != 0xffd8) {
      --dec->jsrc.pub.bytes_in_buffer;
      ++dec->jsrc.pub.next_input_byte;
    }

    if (gst_jpeg_dec_prepare_decode (dec) != GST_FLOW_OK) {
      GST_WARNING_OBJECT (dec, "problem reading jpeg header of 2nd field");
      /* FIXME: post a warning message here? */
      gst_video_frame_unmap (&vframe);
      goto decode_failed;
    }

    /* check if format has changed for the second field */
#ifdef JCS_EXTENSIONS
    if (dec->format_convert) {
      field2_format = dec->format;
    } else
#endif
    {
      switch (dec->cinfo.jpeg_color_space) {
        case JCS_RGB:
          field2_format = GST_VIDEO_FORMAT_RGB;
          break;
        case JCS_GRAYSCALE:
          field2_format = GST_VIDEO_FORMAT_GRAY8;
          break;
        default:
          field2_format = GST_VIDEO_FORMAT_I420;
          break;
      }
    }

    GST_LOG_OBJECT (dec,
        "got for second field of interlaced image: "
        "input width/height of %dx%d with JPEG frame width/height of %dx%d",
        dec->input_state->info.width, dec->input_state->info.height,
        dec->cinfo.output_width, dec->cinfo.output_height);

    if (dec->cinfo.output_width != GST_VIDEO_INFO_WIDTH (&state->info) ||
        GST_VIDEO_INFO_HEIGHT (&state->info) <= dec->cinfo.output_height ||
        GST_VIDEO_INFO_HEIGHT (&state->info) > (dec->cinfo.output_height * 2) ||
        field2_format != GST_VIDEO_INFO_FORMAT (&state->info)) {
      GST_WARNING_OBJECT (dec, "second field has different format than first");
      gst_video_frame_unmap (&vframe);
      goto decode_failed;
    }

    ret = gst_jpeg_dec_decode (dec, &vframe, width, height, 2, 2);
    if (G_UNLIKELY (ret != GST_FLOW_OK)) {
      gst_video_frame_unmap (&vframe);
      goto decode_failed;
    }
  }

  gst_video_frame_unmap (&vframe);

  gst_buffer_unmap (frame->input_buffer, &dec->current_frame_map);
  ret = gst_video_decoder_finish_frame (bdec, frame);
  release_frame = FALSE;
  need_unmap = FALSE;

done:

exit:

  if (need_unmap)
    gst_buffer_unmap (frame->input_buffer, &dec->current_frame_map);

  if (release_frame)
    gst_video_decoder_release_frame (bdec, frame);

  if (state)
    gst_video_codec_state_unref (state);

  return ret;

  /* special cases */
need_more_data:
  {
    GST_LOG_OBJECT (dec, "we need more data");
    ret = GST_FLOW_OK;
    goto exit;
  }
  /* ERRORS */
map_failed:
  {
    GST_ELEMENT_ERROR (dec, RESOURCE, READ, (_("Failed to read memory")),
        ("gst_buffer_map() failed for READ access"));
    ret = GST_FLOW_ERROR;
    goto exit;
  }
decode_error:
  {
    gchar err_msg[JMSG_LENGTH_MAX];

    dec->jerr.pub.format_message ((j_common_ptr) (&dec->cinfo), err_msg);

    GST_VIDEO_DECODER_ERROR (dec, 1, STREAM, DECODE,
        (_("Failed to decode JPEG image")), ("Decode error #%u: %s", code,
            err_msg), ret);

    gst_buffer_unmap (frame->input_buffer, &dec->current_frame_map);
    gst_video_decoder_drop_frame (bdec, frame);
    release_frame = FALSE;
    need_unmap = FALSE;
    jpeg_abort_decompress (&dec->cinfo);
    goto done;
  }
decode_failed:
  {
    /* already posted an error message */
    goto done;
  }
alloc_failed:
  {
    const gchar *reason;

    reason = gst_flow_get_name (ret);

    GST_DEBUG_OBJECT (dec, "failed to alloc buffer, reason %s", reason);
    /* Reset for next time */
    jpeg_abort_decompress (&dec->cinfo);
    if (ret != GST_FLOW_EOS && ret != GST_FLOW_FLUSHING &&
        ret != GST_FLOW_NOT_LINKED) {
      GST_VIDEO_DECODER_ERROR (dec, 1, STREAM, DECODE,
          (_("Failed to decode JPEG image")),
          ("Buffer allocation failed, reason: %s", reason), ret);
      jpeg_abort_decompress (&dec->cinfo);
    }
    goto exit;
  }
}
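The EOI fixup at the top of handle_frame reduces to a two-byte tail check: a well-formed JPEG stream must end with the end-of-image marker 0xff 0xd9, and some cameras omit it. A minimal self-contained version of that check (`jpeg_has_eoi` is an illustrative name):

```c
#include <assert.h>
#include <stddef.h>

/* Returns 1 if the buffer ends with the JPEG end-of-image marker (0xff 0xd9),
 * which is what jpegdec tests before deciding to append a fixup EOI. */
static int jpeg_has_eoi(const unsigned char *data, size_t nbytes) {
    return nbytes >= 2 && data[nbytes - 2] == 0xff && data[nbytes - 1] == 0xd9;
}
```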

A non-packetized stream instead takes path (2), i.e. parse: parse keeps consuming data until it has delimited a complete frame, at which point it takes the have_full_frame path.

static GstFlowReturn
gst_jpeg_dec_parse (GstVideoDecoder * bdec, GstVideoCodecFrame * frame,
    GstAdapter * adapter, gboolean at_eos)
{
  guint size;
  gint toadd = 0;
  gboolean resync;
  gint offset = 0, noffset;
  GstJpegDec *dec = (GstJpegDec *) bdec;

  GST_VIDEO_CODEC_FRAME_SET_SYNC_POINT (frame);

  /* FIXME : The overhead of using scan_uint32 is massive */

  size = gst_adapter_available (adapter);
  GST_DEBUG ("Parsing jpeg image data (%u bytes)", size);

  if (at_eos) {
    GST_DEBUG ("Flushing all data out");
    toadd = size;

    /* If we have leftover data, throw it away */
    if (!dec->saw_header)
      goto drop_frame;
    goto have_full_frame;
  }

  if (size < 8)
    goto need_more_data;

  if (!dec->saw_header) {
    gint ret;
    /* we expect at least 4 bytes, first of which start marker */
    ret =
        gst_adapter_masked_scan_uint32 (adapter, 0xffff0000, 0xffd80000, 0,
        size - 4);

    GST_DEBUG ("ret:%d", ret);
    if (ret < 0)
      goto need_more_data;

    if (ret) {
      gst_adapter_flush (adapter, ret);
      size -= ret;
    }
    dec->saw_header = TRUE;
  }

  while (1) {
    guint frame_len;
    guint32 value;

    GST_DEBUG ("offset:%d, size:%d", offset, size);

    noffset =
        gst_adapter_masked_scan_uint32_peek (adapter, 0x0000ff00, 0x0000ff00,
        offset, size - offset, &value);

    /* lost sync if 0xff marker not where expected */
    if ((resync = (noffset != offset))) {
      GST_DEBUG ("Lost sync at 0x%08x, resyncing", offset + 2);
    }
    /* may have marker, but could have been resyncng */
    resync = resync || dec->parse_resync;

    /* Skip over extra 0xff */
    while ((noffset >= 0) && ((value & 0xff) == 0xff)) {
      noffset++;
      noffset =
          gst_adapter_masked_scan_uint32_peek (adapter, 0x0000ff00, 0x0000ff00,
          noffset, size - noffset, &value);
    }

    /* enough bytes left for marker? (we need 0xNN after the 0xff) */
    if (noffset < 0) {
      GST_DEBUG ("at end of input and no EOI marker found, need more data");
      goto need_more_data;
    }

    /* now lock on the marker we found */
    offset = noffset;
    value = value & 0xff;
    if (value == 0xd9) {
      GST_DEBUG ("0x%08x: EOI marker", offset + 2);
      /* clear parse state */
      dec->saw_header = FALSE;
      dec->parse_resync = FALSE;
      toadd = offset + 4;
      goto have_full_frame;
    }
    if (value == 0xd8) {
      GST_DEBUG ("0x%08x: SOI marker before EOI marker", offset + 2);

      /* clear parse state */
      dec->saw_header = FALSE;
      dec->parse_resync = FALSE;
      toadd = offset;
      goto have_full_frame;
    }

    if (value >= 0xd0 && value <= 0xd7)
      frame_len = 0;
    else {
      /* peek tag and subsequent length */
      if (offset + 2 + 4 > size)
        goto need_more_data;
      else
        gst_adapter_masked_scan_uint32_peek (adapter, 0x0, 0x0, offset + 2, 4,
            &frame_len);

      frame_len = frame_len & 0xffff;
    }
    GST_DEBUG ("0x%08x: tag %02x, frame_len=%u", offset + 2, value, frame_len);
    /* the frame length includes the 2 bytes for the length; here we want at
     * least 2 more bytes at the end for an end marker */
    if (offset + 2 + 2 + frame_len + 2 > size) {
      goto need_more_data;
    }

    if (gst_jpeg_dec_parse_tag_has_entropy_segment (value)) {
      guint eseglen = dec->parse_entropy_len;

      GST_DEBUG ("0x%08x: finding entropy segment length (eseglen:%d)",
          offset + 2, eseglen);
      if (size < offset + 2 + frame_len + eseglen)
        goto need_more_data;
      noffset = offset + 2 + frame_len + dec->parse_entropy_len;
      while (1) {
        GST_DEBUG ("noffset:%d, size:%d, size - noffset:%d",
            noffset, size, size - noffset);
        noffset = gst_adapter_masked_scan_uint32_peek (adapter, 0x0000ff00,
            0x0000ff00, noffset, size - noffset, &value);
        if (noffset < 0) {
          /* need more data */
          dec->parse_entropy_len = size - offset - 4 - frame_len - 2;
          goto need_more_data;
        }
        if ((value & 0xff) != 0x00) {
          eseglen = noffset - offset - frame_len - 2;
          break;
        }
        noffset++;
      }
      dec->parse_entropy_len = 0;
      frame_len += eseglen;
      GST_DEBUG ("entropy segment length=%u => frame_len=%u", eseglen,
          frame_len);
    }
    if (resync) {
      /* check if we will still be in sync if we interpret
       * this as a sync point and skip this frame */
      noffset = offset + frame_len + 2;
      noffset = gst_adapter_masked_scan_uint32 (adapter, 0x0000ff00, 0x0000ff00,
          noffset, 4);
      if (noffset < 0) {
        /* ignore and continue resyncing until we hit the end
         * of our data or find a sync point that looks okay */
        offset++;
        continue;
      }
      GST_DEBUG ("found sync at 0x%x", offset + 2);
    }

    /* Add current data to output buffer */
    toadd += frame_len + 2;
    offset += frame_len + 2;
  }

need_more_data:
  if (toadd)
    gst_video_decoder_add_to_frame (bdec, toadd);
  return GST_VIDEO_DECODER_FLOW_NEED_DATA;

have_full_frame:
  if (toadd)
    gst_video_decoder_add_to_frame (bdec, toadd);
  GST_VIDEO_CODEC_FRAME_SET_SYNC_POINT (frame);
  return gst_video_decoder_have_frame (bdec);

drop_frame:
  gst_adapter_flush (adapter, size);
  return GST_FLOW_OK;
}
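Stripped of the adapter API, entropy segments, and resync handling, the core of this parse loop is: find the SOI marker (0xff 0xd8), then look for the EOI marker (0xff 0xd9) and report how many bytes make up the complete frame, or signal that more data is needed. A deliberately simplified, self-contained sketch of just that delimiting idea (`jpeg_frame_length` is an illustrative name, and real MJPEG parsing must also skip entropy-coded data that may contain 0xff bytes):

```c
#include <assert.h>
#include <stddef.h>

/* Returns the byte length of a complete frame (SOI through EOI) found in
 * data, or -1 if the frame is not complete yet and more data is needed. */
static long jpeg_frame_length(const unsigned char *data, size_t len) {
    size_t i = 0;
    /* find SOI (0xff 0xd8) */
    while (i + 1 < len && !(data[i] == 0xff && data[i + 1] == 0xd8))
        i++;
    if (i + 1 >= len)
        return -1;                      /* no SOI yet: need more data */
    /* find EOI (0xff 0xd9) after the SOI */
    for (size_t j = i + 2; j + 1 < len; j++) {
        if (data[j] == 0xff && data[j + 1] == 0xd9)
            return (long)(j + 2 - i);   /* bytes from SOI through EOI */
    }
    return -1;                          /* EOI not seen yet: need more data */
}
```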

gst_video_decoder_add_to_frame pushes the parsed bytes into priv->output_adapter. gst_video_decoder_have_frame is then called; its key step is invoking gst_video_decoder_decode_frame, which in turn calls decoder_class->handle_frame to perform the decoding.

  /* In reverse playback, just capture and queue frames for later processing */
  if (decoder->input_segment.rate < 0.0) {
    priv->parse_gather =
        g_list_prepend (priv->parse_gather, priv->current_frame);
    priv->current_frame = NULL;
  } else {
    GstVideoCodecFrame *frame = priv->current_frame;
    frame->abidata.ABI.num_subframes++;
    /* In subframe mode, we keep a ref for ourselves
     * as this frame will be kept during the data collection
     * in parsed mode. The frame reference will be released by
     * finish_(sub)frame or drop_(sub)frame.
     */
    if (gst_video_decoder_get_subframe_mode (decoder))
      gst_video_codec_frame_ref (priv->current_frame);
    else
      priv->current_frame = NULL;

    /* Decode the frame, which gives away our ref */
    ret = gst_video_decoder_decode_frame (decoder, frame);
  }

For a non-packetized stream, the data delivered from upstream is not a sequence of readily decodable frames, so the base class must rely on the subclass's parse method to extract decodable frames from the incoming data. Packetized input, by contrast, already arrives in complete frames, so parse is skipped:

/**
 * gst_video_decoder_set_packetized:
 * @decoder: a #GstVideoDecoder
 * @packetized: whether the input data should be considered as packetized.
 *
 * Allows baseclass to consider input data as packetized or not. If the
 * input is packetized, then the @parse method will not be called.
 */
void
gst_video_decoder_set_packetized (GstVideoDecoder * decoder,
    gboolean packetized)
{
  decoder->priv->packetized = packetized;
}

Summary:

The GstVideoDecoder flow is not very complicated: the subclass is configured with the input caps through set_format, and it decodes the data in its handle_frame callback.
