
Using DataX with Python 3

DataX itself is written in Python 2, so running it under Python 3 throws all sorts of syntax errors. The problem is that modern projects are all built on Python 3, which makes this really painful...
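The most common failure is the Python 2 print statement, which Python 3 rejects outright. A minimal illustration (not taken from DataX itself) of the two constructs you will see fixed throughout the files below:

import sys

# Python 2 style, as found in the original DataX scripts:
#   print "Hello DataX"              -> SyntaxError under Python 3
#   print >> sys.stderr, "oops"      -> SyntaxError under Python 3

# Python 3 equivalents: print is a function, stderr goes via file=
print("Hello DataX")
print("oops", file=sys.stderr)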

I found a post online that solves this problem:

Original post: python3执行datax报错问题解决办法 (fixing the errors when running DataX under Python 3) - 卷心菜的奇妙历险 - cnblogs

I'm recording it here again in case the original post becomes unavailable; annoyingly, several posts I relied on before have suddenly disappeared...

Solution

In the datax/bin/ directory,

replace the following three files (back them up first):
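A minimal backup sketch before overwriting anything (the /opt/datax install path is just an assumption; adjust it to your environment):

import shutil

DATAX_BIN = "/opt/datax/bin"  # hypothetical install path; change as needed
for name in ("datax.py", "dxprof.py", "perftrace.py"):
    src = "%s/%s" % (DATAX_BIN, name)
    # copy2 preserves file metadata (timestamps, permission bits)
    shutil.copy2(src, src + ".py2.bak")
print("backup done")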

1. datax.py

#!/usr/bin/env python
# -*- coding:utf-8 -*-

import sys
import os
import signal
import subprocess
import time
import re
import socket
import json
from optparse import OptionParser
from optparse import OptionGroup
from string import Template
import codecs
import platform


def isWindows():
    return platform.system() == 'Windows'


DATAX_HOME = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

DATAX_VERSION = 'DATAX-OPENSOURCE-3.0'

if isWindows():
    codecs.register(lambda name: name == 'cp65001' and codecs.lookup('utf-8') or None)
    CLASS_PATH = ("%s/lib/*") % (DATAX_HOME)
else:
    CLASS_PATH = ("%s/lib/*:.") % (DATAX_HOME)
LOGBACK_FILE = ("%s/conf/logback.xml") % (DATAX_HOME)
DEFAULT_JVM = "-Xms1g -Xmx1g -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=%s/log" % (DATAX_HOME)
DEFAULT_PROPERTY_CONF = "-Dfile.encoding=UTF-8 -Dlogback.statusListenerClass=ch.qos.logback.core.status.NopStatusListener -Djava.security.egd=file:///dev/urandom -Ddatax.home=%s -Dlogback.configurationFile=%s" % (DATAX_HOME, LOGBACK_FILE)
ENGINE_COMMAND = "java -server ${jvm} %s -classpath %s  ${params} com.alibaba.datax.core.Engine -mode ${mode} -jobid ${jobid} -job ${job}" % (DEFAULT_PROPERTY_CONF, CLASS_PATH)
REMOTE_DEBUG_CONFIG = "-Xdebug -Xrunjdwp:transport=dt_socket,server=y,address=9999"

RET_STATE = {
    "KILL": 143,
    "FAIL": -1,
    "OK": 0,
    "RUN": 1,
    "RETRY": 2
}


def getLocalIp():
    try:
        return socket.gethostbyname(socket.getfqdn(socket.gethostname()))
    except:
        return "Unknown"


def suicide(signum, e):
    global child_process
    # Python 3: print is a function; write to stderr via the file= keyword
    print("[Error] DataX receive unexpected signal %d, starts to suicide." % (signum), file=sys.stderr)
    if child_process:
        child_process.send_signal(signal.SIGQUIT)
        time.sleep(1)
        child_process.kill()
    print("DataX Process was killed ! you did ?", file=sys.stderr)
    sys.exit(RET_STATE["KILL"])


def register_signal():
    if not isWindows():
        global child_process
        signal.signal(2, suicide)
        signal.signal(3, suicide)
        signal.signal(15, suicide)


def getOptionParser():
    usage = "usage: %prog [options] job-url-or-path"
    parser = OptionParser(usage=usage)

    prodEnvOptionGroup = OptionGroup(parser, "Product Env Options",
                                     "Normal user use these options to set jvm parameters, job runtime mode etc. "
                                     "Make sure these options can be used in Product Env.")
    prodEnvOptionGroup.add_option("-j", "--jvm", metavar="<jvm parameters>", dest="jvmParameters", action="store",
                                  default=DEFAULT_JVM, help="Set jvm parameters if necessary.")
    prodEnvOptionGroup.add_option("--jobid", metavar="<job unique id>", dest="jobid", action="store", default="-1",
                                  help="Set job unique id when running by Distribute/Local Mode.")
    prodEnvOptionGroup.add_option("-m", "--mode", metavar="<job runtime mode>",
                                  action="store", default="standalone",
                                  help="Set job runtime mode such as: standalone, local, distribute. "
                                       "Default mode is standalone.")
    prodEnvOptionGroup.add_option("-p", "--params", metavar="<parameter used in job config>",
                                  action="store", dest="params",
                                  help='Set job parameter, eg: the source tableName you want to set it by command, '
                                       'then you can use like this: -p"-DtableName=your-table-name", '
                                       'if you have mutiple parameters: -p"-DtableName=your-table-name -DcolumnName=your-column-name".'
                                       'Note: you should config in you job tableName with ${tableName}.')
    prodEnvOptionGroup.add_option("-r", "--reader", metavar="<parameter used in view job config[reader] template>",
                                  action="store", dest="reader", type="string",
                                  help='View job config[reader] template, eg: mysqlreader,streamreader')
    prodEnvOptionGroup.add_option("-w", "--writer", metavar="<parameter used in view job config[writer] template>",
                                  action="store", dest="writer", type="string",
                                  help='View job config[writer] template, eg: mysqlwriter,streamwriter')
    parser.add_option_group(prodEnvOptionGroup)

    devEnvOptionGroup = OptionGroup(parser, "Develop/Debug Options",
                                    "Developer use these options to trace more details of DataX.")
    devEnvOptionGroup.add_option("-d", "--debug", dest="remoteDebug", action="store_true",
                                 help="Set to remote debug mode.")
    devEnvOptionGroup.add_option("--loglevel", metavar="<log level>", dest="loglevel", action="store",
                                 default="info", help="Set log level such as: debug, info, all etc.")
    parser.add_option_group(devEnvOptionGroup)
    return parser


def generateJobConfigTemplate(reader, writer):
    readerRef = "Please refer to the %s document:\n     https://github.com/alibaba/DataX/blob/master/%s/doc/%s.md \n" % (reader, reader, reader)
    writerRef = "Please refer to the %s document:\n     https://github.com/alibaba/DataX/blob/master/%s/doc/%s.md \n " % (writer, writer, writer)
    print(readerRef)
    print(writerRef)
    jobGuid = 'Please save the following configuration as a json file and  use\n     python {DATAX_HOME}/bin/datax.py {JSON_FILE_NAME}.json \nto run the job.\n'
    print(jobGuid)
    jobTemplate = {
        "job": {
            "setting": {
                "speed": {
                    "channel": ""
                }
            },
            "content": [
                {
                    "reader": {},
                    "writer": {}
                }
            ]
        }
    }
    readerTemplatePath = "%s/plugin/reader/%s/plugin_job_template.json" % (DATAX_HOME, reader)
    writerTemplatePath = "%s/plugin/writer/%s/plugin_job_template.json" % (DATAX_HOME, writer)
    try:
        readerPar = readPluginTemplate(readerTemplatePath)
    except Exception as e:
        print("Read reader[%s] template error: can't find file %s" % (reader, readerTemplatePath))
    try:
        writerPar = readPluginTemplate(writerTemplatePath)
    except Exception as e:
        print("Read writer[%s] template error: : can't find file %s" % (writer, writerTemplatePath))
    jobTemplate['job']['content'][0]['reader'] = readerPar
    jobTemplate['job']['content'][0]['writer'] = writerPar
    print(json.dumps(jobTemplate, indent=4, sort_keys=True))


def readPluginTemplate(plugin):
    with open(plugin, 'r') as f:
        return json.load(f)


def isUrl(path):
    if not path:
        return False
    assert (isinstance(path, str))
    m = re.match(r"^http[s]?://\S+\w*", path.lower())
    if m:
        return True
    else:
        return False


def buildStartCommand(options, args):
    commandMap = {}
    tempJVMCommand = DEFAULT_JVM
    if options.jvmParameters:
        tempJVMCommand = tempJVMCommand + " " + options.jvmParameters

    if options.remoteDebug:
        tempJVMCommand = tempJVMCommand + " " + REMOTE_DEBUG_CONFIG
        print('local ip: ', getLocalIp())

    if options.loglevel:
        tempJVMCommand = tempJVMCommand + " " + ("-Dloglevel=%s" % (options.loglevel))

    if options.mode:
        commandMap["mode"] = options.mode

    # jobResource may be a URL, or a local file path (relative or absolute)
    jobResource = args[0]
    if not isUrl(jobResource):
        jobResource = os.path.abspath(jobResource)
        if jobResource.lower().startswith("file://"):
            jobResource = jobResource[len("file://"):]

    jobParams = ("-Dlog.file.name=%s") % (jobResource[-20:].replace('/', '_').replace('.', '_'))
    if options.params:
        jobParams = jobParams + " " + options.params

    if options.jobid:
        commandMap["jobid"] = options.jobid

    commandMap["jvm"] = tempJVMCommand
    commandMap["params"] = jobParams
    commandMap["job"] = jobResource

    return Template(ENGINE_COMMAND).substitute(**commandMap)


def printCopyright():
    print('''
DataX (%s), From Alibaba !
Copyright (C) 2010-2017, Alibaba Group. All Rights Reserved.
''' % DATAX_VERSION)
    sys.stdout.flush()


if __name__ == "__main__":
    printCopyright()
    parser = getOptionParser()
    options, args = parser.parse_args(sys.argv[1:])
    if options.reader is not None and options.writer is not None:
        generateJobConfigTemplate(options.reader, options.writer)
        sys.exit(RET_STATE['OK'])
    if len(args) != 1:
        parser.print_help()
        sys.exit(RET_STATE['FAIL'])

    startCommand = buildStartCommand(options, args)
    # print(startCommand)

    child_process = subprocess.Popen(startCommand, shell=True)
    register_signal()
    (stdout, stderr) = child_process.communicate()

    sys.exit(child_process.returncode)
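After swapping in this file, a quick sanity check is to run the sample job that ships with DataX under Python 3 (the /opt/datax path is an assumption; any valid job JSON works):

python3 /opt/datax/bin/datax.py /opt/datax/job/job.json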

2. dxprof.py

#! /usr/bin/env python
# vim: set expandtab tabstop=4 shiftwidth=4 foldmethod=marker nu:

import re
import sys
import time

REG_SQL_WAKE = re.compile(r'Begin\s+to\s+read\s+record\s+by\s+Sql', re.IGNORECASE)
REG_SQL_DONE = re.compile(r'Finished\s+read\s+record\s+by\s+Sql', re.IGNORECASE)
REG_SQL_PATH = re.compile(r'from\s+(\w+)(\s+where|\s*$)', re.IGNORECASE)
REG_SQL_JDBC = re.compile(r'jdbcUrl:\s*\[(.+?)\]', re.IGNORECASE)
REG_SQL_UUID = re.compile(r'(\d+\-)+reader')
REG_COMMIT_UUID = re.compile(r'(\d+\-)+writer')
REG_COMMIT_WAKE = re.compile(r'begin\s+to\s+commit\s+blocks', re.IGNORECASE)
REG_COMMIT_DONE = re.compile(r'commit\s+blocks\s+ok', re.IGNORECASE)


# {{{ function parse_timestamp() #
def parse_timestamp(line):
    try:
        ts = int(time.mktime(time.strptime(line[0:19], '%Y-%m-%d %H:%M:%S')))
    except:
        ts = 0
    return ts
# }}} #


# {{{ function parse_query_host() #
def parse_query_host(line):
    ori = REG_SQL_JDBC.search(line)
    if (not ori):
        return ''
    ori = ori.group(1).split('?')[0]
    off = ori.find('@')
    if (off > -1):
        ori = ori[off + 1:len(ori)]
    else:
        off = ori.find('//')
        if (off > -1):
            ori = ori[off + 2:len(ori)]
    return ori.lower()
# }}} #


# {{{ function parse_query_table() #
def parse_query_table(line):
    ori = REG_SQL_PATH.search(line)
    return (ori and ori.group(1).lower()) or ''
# }}} #


# {{{ function parse_reader_task() #
def parse_task(fname):
    global LAST_SQL_UUID
    global LAST_COMMIT_UUID
    global DATAX_JOBDICT
    global DATAX_JOBDICT_COMMIT
    global UNIXTIME
    LAST_SQL_UUID = ''
    DATAX_JOBDICT = {}
    LAST_COMMIT_UUID = ''
    DATAX_JOBDICT_COMMIT = {}

    UNIXTIME = int(time.time())
    with open(fname, 'r') as f:
        for line in f.readlines():
            line = line.strip()
            if (LAST_SQL_UUID and (LAST_SQL_UUID in DATAX_JOBDICT)):
                DATAX_JOBDICT[LAST_SQL_UUID]['host'] = parse_query_host(line)
                LAST_SQL_UUID = ''
            if line.find('CommonRdbmsReader$Task') > 0:
                parse_read_task(line)
            elif line.find('commit blocks') > 0:
                parse_write_task(line)
            else:
                continue
# }}} #


# {{{ function parse_read_task() #
def parse_read_task(line):
    # update the module-level marker so parse_task() can attach the jdbcUrl
    # host that appears on the line following the wake line
    global LAST_SQL_UUID
    ser = REG_SQL_UUID.search(line)
    if not ser:
        return
    LAST_SQL_UUID = ser.group()
    if REG_SQL_WAKE.search(line):
        DATAX_JOBDICT[LAST_SQL_UUID] = {
            'stat': 'R',
            'wake': parse_timestamp(line),
            'done': UNIXTIME,
            'host': parse_query_host(line),
            'path': parse_query_table(line)
        }
    elif ((LAST_SQL_UUID in DATAX_JOBDICT) and REG_SQL_DONE.search(line)):
        DATAX_JOBDICT[LAST_SQL_UUID]['stat'] = 'D'
        DATAX_JOBDICT[LAST_SQL_UUID]['done'] = parse_timestamp(line)
# }}} #


# {{{ function parse_write_task() #
def parse_write_task(line):
    ser = REG_COMMIT_UUID.search(line)
    if not ser:
        return
    LAST_COMMIT_UUID = ser.group()
    if REG_COMMIT_WAKE.search(line):
        DATAX_JOBDICT_COMMIT[LAST_COMMIT_UUID] = {
            'stat': 'R',
            'wake': parse_timestamp(line),
            'done': UNIXTIME,
        }
    elif ((LAST_COMMIT_UUID in DATAX_JOBDICT_COMMIT) and REG_COMMIT_DONE.search(line)):
        DATAX_JOBDICT_COMMIT[LAST_COMMIT_UUID]['stat'] = 'D'
        DATAX_JOBDICT_COMMIT[LAST_COMMIT_UUID]['done'] = parse_timestamp(line)
# }}} #


# {{{ function result_analyse() #
def result_analyse():
    tasklist = []
    hostsmap = {}
    statvars = {'sum': 0, 'cnt': 0, 'svr': 0, 'max': 0, 'min': int(time.time())}
    tasklist_commit = []
    statvars_commit = {'sum': 0, 'cnt': 0}

    for idx in DATAX_JOBDICT:
        item = DATAX_JOBDICT[idx]
        item['uuid'] = idx
        item['cost'] = item['done'] - item['wake']
        tasklist.append(item)

        if (not (item['host'] in hostsmap)):
            hostsmap[item['host']] = 1
            statvars['svr'] += 1

        if (item['cost'] > -1 and item['cost'] < 864000):
            statvars['sum'] += item['cost']
            statvars['cnt'] += 1
            statvars['max'] = max(statvars['max'], item['done'])
            statvars['min'] = min(statvars['min'], item['wake'])

    for idx in DATAX_JOBDICT_COMMIT:
        itemc = DATAX_JOBDICT_COMMIT[idx]
        itemc['uuid'] = idx
        itemc['cost'] = itemc['done'] - itemc['wake']
        tasklist_commit.append(itemc)

        if (itemc['cost'] > -1 and itemc['cost'] < 864000):
            statvars_commit['sum'] += itemc['cost']
            statvars_commit['cnt'] += 1

    ttl = (statvars['max'] - statvars['min']) or 1
    idx = float(statvars['cnt']) / (statvars['sum'] or ttl)

    # Python 3: list.sort() no longer accepts a bare comparison function,
    # so sort by cost in descending order via a key instead
    tasklist.sort(key=lambda t: t['cost'], reverse=True)
    for item in tasklist:
        print('%s\t%s.%s\t%s\t%s\t% 4d\t% 2.1f%%\t% .2f' % (
            item['stat'], item['host'], item['path'],
            time.strftime('%H:%M:%S', time.localtime(item['wake'])),
            (('D' == item['stat']) and time.strftime('%H:%M:%S', time.localtime(item['done']))) or '--',
            item['cost'], 100 * item['cost'] / ttl, idx * item['cost']))

    if (not len(tasklist) or not statvars['cnt']):
        return

    print('\n--- DataX Profiling Statistics ---')
    print('%d task(s) on %d server(s), Total elapsed %d second(s), %.2f second(s) per task in average' % (
        statvars['cnt'], statvars['svr'], statvars['sum'], float(statvars['sum']) / statvars['cnt']))
    print('Actually cost %d second(s) (%s - %s), task concurrency: %.2f, tilt index: %.2f' % (
        ttl,
        time.strftime('%H:%M:%S', time.localtime(statvars['min'])),
        time.strftime('%H:%M:%S', time.localtime(statvars['max'])),
        float(statvars['sum']) / ttl, idx * tasklist[0]['cost']))

    idx_commit = float(statvars_commit['cnt']) / (statvars_commit['sum'] or ttl)
    tasklist_commit.sort(key=lambda t: t['cost'], reverse=True)
    print('%d task(s) done odps commit, Total elapsed %d second(s), %.2f second(s) per task in average, tilt index: %.2f' % (
        statvars_commit['cnt'], statvars_commit['sum'],
        float(statvars_commit['sum']) / statvars_commit['cnt'],
        idx_commit * tasklist_commit[0]['cost']))
# }}} #


if (len(sys.argv) < 2):
    print("Usage: %s filename" % (sys.argv[0]))
    quit(1)
else:
    parse_task(sys.argv[1])
    result_analyse()
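dxprof.py is a profiling helper: it parses a DataX run log and prints per-task read/commit timing statistics. Assuming a log file produced by a previous run (the file name here is hypothetical):

python3 /opt/datax/bin/dxprof.py /opt/datax/log/datax_job.log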

3. perftrace.py

#!/usr/bin/env python
# -*- coding:utf-8 -*-

"""
Life's short, Python more.
"""

import re
import os
import sys
import json
import uuid
import signal
import time
import subprocess
from optparse import OptionParser

# Python 3 note: the original script called reload(sys) and
# sys.setdefaultencoding('utf8') here; both are Python 2 relics and are
# unnecessary (and broken) under Python 3, where str is unicode by default.


##begin cli & help logic
def getOptionParser():
    usage = getUsage()
    parser = OptionParser(usage=usage)
    # rdbms reader and writer
    parser.add_option('-r', '--reader', action='store', dest='reader',
                      help='trace datasource read performance with specified !json! string')
    parser.add_option('-w', '--writer', action='store', dest='writer',
                      help='trace datasource write performance with specified !json! string')
    parser.add_option('-c', '--channel', action='store', dest='channel', default='1',
                      help='the number of concurrent sync thread, the default is 1')
    parser.add_option('-f', '--file', action='store',
                      help='existing datax configuration file, include reader and writer params')
    parser.add_option('-t', '--type', action='store', default='reader',
                      help='trace which side\'s performance, cooperate with -f --file params, need to be reader or writer')
    parser.add_option('-d', '--delete', action='store', default='true',
                      help='delete temporary files, the default value is true')
    # parser.add_option('-h', '--help', action='store', default='true', help='print usage information')
    return parser


def getUsage():
    return '''
The following params are available for -r --reader:
    [these params is for rdbms reader, used to trace rdbms read performance, it's like datax's key]
    *datasourceType: datasource type, may be mysql|drds|oracle|ads|sqlserver|postgresql|db2 etc...
    *jdbcUrl:        datasource jdbc connection string, mysql as a example: jdbc:mysql://ip:port/database
    *username:       username for datasource
    *password:       password for datasource
    *table:          table name for read data
    column:          column to be read, the default value is ['*']
    splitPk:         the splitPk column of rdbms table
    where:           limit the scope of the performance data set
    fetchSize:       how many rows to be fetched at each communicate

    [these params is for stream reader, used to trace rdbms write performance]
    reader-sliceRecordCount:  how man test data to mock(each channel), the default value is 10000
    reader-column          :  stream reader while generate test data(type supports: string|long|date|double|bool|bytes; support constant value and random function),demo: [{"type":"string","value":"abc"},{"type":"string","random":"10,20"}]

The following params are available for -w --writer:
    [these params is for rdbms writer, used to trace rdbms write performance, it's like datax's key]
    datasourceType:  datasource type, may be mysql|drds|oracle|ads|sqlserver|postgresql|db2|ads etc...
    *jdbcUrl:        datasource jdbc connection string, mysql as a example: jdbc:mysql://ip:port/database
    *username:       username for datasource
    *password:       password for datasource
    *table:          table name for write data
    column:          column to be writed, the default value is ['*']
    batchSize:       how many rows to be storeed at each communicate, the default value is 512
    preSql:          prepare sql to be executed before write data, the default value is ''
    postSql:         post sql to be executed end of write data, the default value is ''
    url:             required for ads, pattern is ip:port
    schme:           required for ads, ads database name

    [these params is for stream writer, used to trace rdbms read performance]
    writer-print:           true means print data read from source datasource, the default value is false

The following params are available global control:
    -c --channel:    the number of concurrent tasks, the default value is 1
    -f --file:       existing completely dataX configuration file path
    -t --type:       test read or write performance for a datasource, couble be reader or writer, in collaboration with -f --file
    -h --help:       print help message

some demo:
perftrace.py --channel=10 --reader='{"jdbcUrl":"jdbc:mysql://127.0.0.1:3306/database", "username":"", "password":"", "table": "", "where":"", "splitPk":"", "writer-print":"false"}'
perftrace.py --channel=10 --writer='{"jdbcUrl":"jdbc:mysql://127.0.0.1:3306/database", "username":"", "password":"", "table": "", "reader-sliceRecordCount": "10000", "reader-column": [{"type":"string","value":"abc"},{"type":"string","random":"10,20"}]}'
perftrace.py --file=/tmp/datax.job.json --type=reader --reader='{"writer-print": "false"}'
perftrace.py --file=/tmp/datax.job.json --type=writer --writer='{"reader-sliceRecordCount": "10000", "reader-column": [{"type":"string","value":"abc"},{"type":"string","random":"10,20"}]}'

some example jdbc url pattern, may help:
jdbc:oracle:thin:@ip:port:database
jdbc:mysql://ip:port/database
jdbc:sqlserver://ip:port;DatabaseName=database
jdbc:postgresql://ip:port/database
warn: ads url pattern is ip:port
warn: test write performance will write data into your table, you can use a temporary table just for test.
'''


def printCopyright():
    DATAX_VERSION = 'UNKNOWN_DATAX_VERSION'
    print('''
DataX Util Tools (%s), From Alibaba !
Copyright (C) 2010-2016, Alibaba Group. All Rights Reserved.''' % DATAX_VERSION)
    sys.stdout.flush()


def yesNoChoice():
    yes = set(['yes', 'y', 'ye', ''])
    no = set(['no', 'n'])
    choice = input().lower()  # Python 3: raw_input() was renamed to input()
    if choice in yes:
        return True
    elif choice in no:
        return False
    else:
        sys.stdout.write("Please respond with 'yes' or 'no'")
##end cli & help logic


##begin process logic
def suicide(signum, e):
    global childProcess
    print("[Error] Receive unexpected signal %d, starts to suicide." % (signum), file=sys.stderr)
    if childProcess:
        childProcess.send_signal(signal.SIGQUIT)
        time.sleep(1)
        childProcess.kill()
    print("DataX Process was killed ! you did ?", file=sys.stderr)
    sys.exit(-1)


def registerSignal():
    global childProcess
    signal.signal(2, suicide)
    signal.signal(3, suicide)
    signal.signal(15, suicide)


def fork(command, isShell=False):
    global childProcess
    childProcess = subprocess.Popen(command, shell=isShell)
    registerSignal()
    (stdout, stderr) = childProcess.communicate()
    # block until the child process exits
    childProcess.wait()
    return childProcess.returncode
##end process logic


##begin datax json generate logic
# warn: if not '': -> true;   if not None: -> true
def notNone(obj, context):
    if not obj:
        raise Exception("Configuration property [%s] could not be blank!" % (context))


def attributeNotNone(obj, attributes):
    for key in attributes:
        notNone(obj.get(key), key)


def isBlank(value):
    if value is None or len(value.strip()) == 0:
        return True
    return False


def parsePluginName(jdbcUrl, pluginType):
    import re
    # warn: drds
    name = 'pluginName'
    mysqlRegex = re.compile('jdbc:(mysql)://.*')
    if (mysqlRegex.match(jdbcUrl)):
        name = 'mysql'
    postgresqlRegex = re.compile('jdbc:(postgresql)://.*')
    if (postgresqlRegex.match(jdbcUrl)):
        name = 'postgresql'
    oracleRegex = re.compile('jdbc:(oracle):.*')
    if (oracleRegex.match(jdbcUrl)):
        name = 'oracle'
    sqlserverRegex = re.compile('jdbc:(sqlserver)://.*')
    if (sqlserverRegex.match(jdbcUrl)):
        name = 'sqlserver'
    db2Regex = re.compile('jdbc:(db2)://.*')
    if (db2Regex.match(jdbcUrl)):
        name = 'db2'
    return "%s%s" % (name, pluginType)


def renderDataXJson(paramsDict, readerOrWriter='reader', channel=1):
    dataxTemplate = {
        "job": {
            "setting": {
                "speed": {
                    "channel": 1
                }
            },
            "content": [
                {
                    "reader": {
                        "name": "",
                        "parameter": {
                            "username": "",
                            "password": "",
                            "sliceRecordCount": "10000",
                            "column": ["*"],
                            "connection": [
                                {
                                    "table": [],
                                    "jdbcUrl": []
                                }
                            ]
                        }
                    },
                    "writer": {
                        "name": "",
                        "parameter": {
                            "print": "false",
                            "connection": [
                                {
                                    "table": [],
                                    "jdbcUrl": ''
                                }
                            ]
                        }
                    }
                }
            ]
        }
    }
    dataxTemplate['job']['setting']['speed']['channel'] = channel
    dataxTemplateContent = dataxTemplate['job']['content'][0]

    pluginName = ''
    if paramsDict.get('datasourceType'):
        pluginName = '%s%s' % (paramsDict['datasourceType'], readerOrWriter)
    elif paramsDict.get('jdbcUrl'):
        pluginName = parsePluginName(paramsDict['jdbcUrl'], readerOrWriter)
    elif paramsDict.get('url'):
        pluginName = 'adswriter'

    theOtherSide = 'writer' if readerOrWriter == 'reader' else 'reader'
    dataxPluginParamsContent = dataxTemplateContent.get(readerOrWriter).get('parameter')
    dataxPluginParamsContent.update(paramsDict)

    dataxPluginParamsContentOtherSide = dataxTemplateContent.get(theOtherSide).get('parameter')

    if readerOrWriter == 'reader':
        dataxTemplateContent.get('reader')['name'] = pluginName
        dataxTemplateContent.get('writer')['name'] = 'streamwriter'
        if paramsDict.get('writer-print'):
            dataxPluginParamsContentOtherSide['print'] = paramsDict['writer-print']
            del dataxPluginParamsContent['writer-print']
        del dataxPluginParamsContentOtherSide['connection']

    if readerOrWriter == 'writer':
        dataxTemplateContent.get('reader')['name'] = 'streamreader'
        dataxTemplateContent.get('writer')['name'] = pluginName
        if paramsDict.get('reader-column'):
            dataxPluginParamsContentOtherSide['column'] = paramsDict['reader-column']
            del dataxPluginParamsContent['reader-column']
        if paramsDict.get('reader-sliceRecordCount'):
            dataxPluginParamsContentOtherSide['sliceRecordCount'] = paramsDict['reader-sliceRecordCount']
            del dataxPluginParamsContent['reader-sliceRecordCount']
        del dataxPluginParamsContentOtherSide['connection']

    if paramsDict.get('jdbcUrl'):
        if readerOrWriter == 'reader':
            dataxPluginParamsContent['connection'][0]['jdbcUrl'].append(paramsDict['jdbcUrl'])
        else:
            dataxPluginParamsContent['connection'][0]['jdbcUrl'] = paramsDict['jdbcUrl']
    if paramsDict.get('table'):
        dataxPluginParamsContent['connection'][0]['table'].append(paramsDict['table'])

    traceJobJson = json.dumps(dataxTemplate, indent=4)
    return traceJobJson


def isUrl(path):
    if not path:
        return False
    if not isinstance(path, str):
        raise Exception('Configuration file path required for the string, you configure is:%s' % path)
    m = re.match(r"^http[s]?://\S+\w*", path.lower())
    if m:
        return True
    else:
        return False


def readJobJsonFromLocal(jobConfigPath):
    jobConfigContent = None
    jobConfigPath = os.path.abspath(jobConfigPath)
    file = open(jobConfigPath)
    try:
        jobConfigContent = file.read()
    finally:
        file.close()
    if not jobConfigContent:
        raise Exception("Your job configuration file read the result is empty, please check the configuration is legal, path: [%s]\nconfiguration:\n%s" % (jobConfigPath, str(jobConfigContent)))
    return jobConfigContent


def readJobJsonFromRemote(jobConfigPath):
    # Python 3: urllib.urlopen() moved to urllib.request.urlopen()
    import urllib.request
    conn = urllib.request.urlopen(jobConfigPath)
    jobJson = conn.read()
    return jobJson


def parseJson(strConfig, context):
    try:
        return json.loads(strConfig)
    except Exception as e:
        import traceback
        traceback.print_exc()
        sys.stdout.flush()
        print('%s %s need in line with json syntax' % (context, strConfig), file=sys.stderr)
        sys.exit(-1)


def convert(options, args):
    traceJobJson = ''
    if options.file:
        if isUrl(options.file):
            traceJobJson = readJobJsonFromRemote(options.file)
        else:
            traceJobJson = readJobJsonFromLocal(options.file)
        traceJobDict = parseJson(traceJobJson, '%s content' % options.file)
        attributeNotNone(traceJobDict, ['job'])
        attributeNotNone(traceJobDict['job'], ['content'])
        attributeNotNone(traceJobDict['job']['content'][0], ['reader', 'writer'])
        attributeNotNone(traceJobDict['job']['content'][0]['reader'], ['name', 'parameter'])
        attributeNotNone(traceJobDict['job']['content'][0]['writer'], ['name', 'parameter'])
        if options.type == 'reader':
            traceJobDict['job']['content'][0]['writer']['name'] = 'streamwriter'
            if options.reader:
                traceReaderDict = parseJson(options.reader, 'reader config')
                if traceReaderDict.get('writer-print') is not None:
                    traceJobDict['job']['content'][0]['writer']['parameter']['print'] = traceReaderDict.get('writer-print')
                else:
                    traceJobDict['job']['content'][0]['writer']['parameter']['print'] = 'false'
            else:
                traceJobDict['job']['content'][0]['writer']['parameter']['print'] = 'false'
        elif options.type == 'writer':
            traceJobDict['job']['content'][0]['reader']['name'] = 'streamreader'
            if options.writer:
                traceWriterDict = parseJson(options.writer, 'writer config')
                if traceWriterDict.get('reader-column'):
                    traceJobDict['job']['content'][0]['reader']['parameter']['column'] = traceWriterDict['reader-column']
                if traceWriterDict.get('reader-sliceRecordCount'):
                    traceJobDict['job']['content'][0]['reader']['parameter']['sliceRecordCount'] = traceWriterDict['reader-sliceRecordCount']
            else:
                columnSize = len(traceJobDict['job']['content'][0]['writer']['parameter']['column'])
                streamReaderColumn = []
                for i in range(columnSize):
                    streamReaderColumn.append({"type": "long", "random": "2,10"})
                traceJobDict['job']['content'][0]['reader']['parameter']['column'] = streamReaderColumn
                traceJobDict['job']['content'][0]['reader']['parameter']['sliceRecordCount'] = 10000
        else:
            pass  # do nothing
        return json.dumps(traceJobDict, indent=4)
    elif options.reader:
        traceReaderDict = parseJson(options.reader, 'reader config')
        return renderDataXJson(traceReaderDict, 'reader', options.channel)
    elif options.writer:
        traceWriterDict = parseJson(options.writer, 'writer config')
        return renderDataXJson(traceWriterDict, 'writer', options.channel)
    else:
        print(getUsage())
        sys.exit(-1)
    # dataxParams = {}
    # for opt, value in options.__dict__.items():
    #     dataxParams[opt] = value
##end datax json generate logic


if __name__ == "__main__":
    printCopyright()
    parser = getOptionParser()
    options, args = parser.parse_args(sys.argv[1:])
    # print(options, args)
    dataxTraceJobJson = convert(options, args)

    # uuid1() is built from the MAC address, the current timestamp and a
    # random number, so the generated file name is globally unique
    dataxJobPath = os.path.join(os.getcwd(), "perftrace-" + str(uuid.uuid1()))
    jobConfigOk = True
    if os.path.exists(dataxJobPath):
        print("file already exists, truncate and rewrite it? %s" % dataxJobPath)
        if yesNoChoice():
            jobConfigOk = True
        else:
            print("exit failed, because of file conflict")
            sys.exit(-1)
    fileWriter = open(dataxJobPath, 'w')
    fileWriter.write(dataxTraceJobJson)
    fileWriter.close()

    print("trace environments:")
    print("dataxJobPath:  %s" % dataxJobPath)
    dataxHomePath = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
    print("dataxHomePath: %s" % dataxHomePath)
    dataxCommand = "%s %s" % (os.path.join(dataxHomePath, "bin", "datax.py"), dataxJobPath)
    print("dataxCommand:  %s" % dataxCommand)

    returncode = fork(dataxCommand, True)
    if options.delete == 'true':
        os.remove(dataxJobPath)
    sys.exit(returncode)
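Note that perftrace.py writes a temporary job file and then invokes datax.py on it, so verify it only after datax.py has been replaced. One of the script's own built-in demos, run under Python 3 (connection values are placeholders to fill in):

python3 /opt/datax/bin/perftrace.py --channel=10 --reader='{"jdbcUrl":"jdbc:mysql://127.0.0.1:3306/database", "username":"", "password":"", "table": "", "where":"", "splitPk":"", "writer-print":"false"}'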

OK — once these three files are replaced with the Python 3 versions above, DataX works normally again.
