How to convert HTML tables into CSV files with Python
This article walks through a working example of converting HTML tables into CSV files with Python, shared here for reference. The details are as follows.
Usage: python html2csv.py *.html
The code is built on the HTMLParser module from the Python 2 standard library.
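To make the behavior concrete, here is a minimal usage sketch, assuming the html2csv class defined in the full script below; the sample table content is invented for illustration, and the snippet is Python 2 like the rest of the code:

# Illustrative only -- assumes the html2csv class from the script below.
parser = html2csv()
parser.feed('<table><tr><td>Name</td><td>Age</td></tr>'
            '<tr><td>Alice</td><td>30</td></tr></table>')
print parser.getCSV(True)            # purge=True flushes any unfinished row
# Prints:
# "Name","Age"
# "Alice","30"

The full script follows.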
#!/usr/bin/python
# -*- coding: iso-8859-1 -*-
# Hello, this program is written in Python - http://python.org
programname = 'html2csv - version 2002-09-20 - http://sebsauvage.net'
import sys, getopt, os.path, glob, HTMLParser, re
try: import psyco ; psyco.jit()  # If present, use psyco to accelerate the program
except: pass

def usage(progname):
    ''' Display program usage. '''
    progname = os.path.split(progname)[1]
    if os.path.splitext(progname)[1] in ['.py', '.pyc']: progname = 'python ' + progname
    return '''%s
A coarse HTML tables to CSV (Comma-Separated Values) converter.
Syntax    : %s source.html
Arguments : source.html is the HTML file you want to convert to CSV.
            By default, the file will be converted to CSV with the same
            name and the .csv extension (source.html -> source.csv).
            You can use * and ?.
Examples  : %s mypage.html
          : %s *.html
This program is public domain.
Author : Sebastien SAUVAGE <sebsauvage at sebsauvage dot net>
         http://sebsauvage.net
''' % (programname, progname, progname, progname)

class html2csv(HTMLParser.HTMLParser):
    ''' A basic parser which converts HTML tables into CSV.
        Feed HTML with feed(). Get CSV with getCSV(). (See example below.)
        All tables in the HTML will be converted to CSV (in the order they occur
        in the HTML file).
        You can process very large HTML files by feeding this class with chunks
        of HTML while getting chunks of CSV by calling getCSV().
        Should handle badly formatted HTML (missing <tr>, </tr>, </td>,
        extraneous </td>, </tr>...).
        This parser uses HTMLParser from the HTMLParser module,
        not HTMLParser from the htmllib module.
        Example: parser = html2csv()
                 parser.feed( open('mypage.html','rb').read() )
                 open('mytables.csv','w+b').write( parser.getCSV() )
        This class is public domain.
        Author: Sébastien SAUVAGE <sebsauvage at sebsauvage dot net>
                http://sebsauvage.net
        Versions:
            2002-09-19 : - First version.
            2002-09-20 : - Now uses HTMLParser.HTMLParser instead of htmllib.HTMLParser.
                         - Now parses the command line.
        To do:
            - handle <PRE> tags
            - convert HTML entities (&name; and &#ref;) to ASCII.
    '''
    def __init__(self):
        HTMLParser.HTMLParser.__init__(self)
        self.CSV = ''       # The CSV data
        self.CSVrow = ''    # The current CSV row being constructed from HTML
        self.inTD = 0       # Used to track if we are inside or outside a <TD>...</TD> tag.
        self.inTR = 0       # Used to track if we are inside or outside a <TR>...</TR> tag.
        self.re_multiplespaces = re.compile(r'\s+')  # regular expression used to collapse excess whitespace
        self.rowCount = 0   # CSV output line counter.
    def handle_starttag(self, tag, attrs):
        if   tag == 'tr': self.start_tr()
        elif tag == 'td': self.start_td()
    def handle_endtag(self, tag):
        if   tag == 'tr': self.end_tr()
        elif tag == 'td': self.end_td()
    def start_tr(self):
        if self.inTR: self.end_tr()  # <TR> implies </TR>
        self.inTR = 1
    def end_tr(self):
        if self.inTD: self.end_td()  # </TR> implies </TD>
        self.inTR = 0
        if len(self.CSVrow) > 0:
            self.CSV += self.CSVrow[:-1]  # drop the trailing comma
            self.CSVrow = ''
        self.CSV += '\n'
        self.rowCount += 1
    def start_td(self):
        if not self.inTR: self.start_tr()  # <TD> implies <TR>
        self.CSVrow += '"'
        self.inTD = 1
    def end_td(self):
        if self.inTD:
            self.CSVrow += '",'
            self.inTD = 0
    def handle_data(self, data):
        if self.inTD:
            # Strip tabs and newlines, double any embedded quotes, collapse runs of whitespace.
            self.CSVrow += self.re_multiplespaces.sub(' ', data.replace('\t', ' ').replace('\n', '').replace('\r', '').replace('"', '""'))
    def getCSV(self, purge=False):
        ''' Get output CSV.
            If purge is true, getCSV() will return all remaining data,
            even if <td> or <tr> are not properly closed.
            (You would typically call getCSV with purge=True when you do not have
            any more HTML to feed and you suspect dirty HTML (unclosed tags).) '''
        if purge and self.inTR: self.end_tr()  # This will also end_td() and append the last CSV row to the output.
        dataout = self.CSV[:]
        self.CSV = ''
        return dataout

if __name__ == "__main__":
    try:  # Put getopt in place for future usage.
        opts, args = getopt.getopt(sys.argv[1:], '')
    except getopt.GetoptError:
        print usage(sys.argv[0])  # print help information and exit
        sys.exit(2)
    if len(args) == 0:
        print usage(sys.argv[0])  # print help information and exit
        sys.exit(2)
    print programname
    html_files = glob.glob(args[0])
    for htmlfilename in html_files:
        outputfilename = os.path.splitext(htmlfilename)[0] + '.csv'
        parser = html2csv()
        print 'Reading %s, writing %s...' % (htmlfilename, outputfilename)
        try:
            htmlfile = open(htmlfilename, 'rb')
            csvfile = open(outputfilename, 'w+b')
            data = htmlfile.read(8192)  # convert the file in 8 KB chunks
            while data:
                parser.feed(data)
                csvfile.write(parser.getCSV())
                sys.stdout.write('%d CSV rows written.\r' % parser.rowCount)
                data = htmlfile.read(8192)
            csvfile.write(parser.getCSV(True))  # purge whatever remains (unclosed tags)
            csvfile.close()
            htmlfile.close()
        except:
            print 'Error converting %s ' % htmlfilename
            try: htmlfile.close()
            except: pass
            try: csvfile.close()
            except: pass
    print 'All done. '
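The script above targets Python 2: it imports the HTMLParser module (renamed html.parser in Python 3), uses print statements, and optionally loads the psyco JIT, a Python 2-era accelerator. For readers on Python 3, the following is a minimal sketch of the same approach built on html.parser. It is an illustrative port under simplified assumptions (whole-file reads instead of 8 KB chunks, no getopt handling), and the Html2Csv class and its method names are introduced here for the sketch, not taken from the original script.

#!/usr/bin/env python3
# Minimal, illustrative Python 3 sketch of the same idea (not the original script).
import os.path
import re
import sys
from html.parser import HTMLParser

class Html2Csv(HTMLParser):
    '''Collect <td> contents into quoted, comma-separated lines, one per <tr>.'''
    def __init__(self):
        super().__init__(convert_charrefs=True)  # HTML character references are decoded for us
        self.rows = []       # finished CSV lines
        self.cells = None    # cells of the row being built, or None outside a row
        self.in_td = False   # True while inside a <td>...</td>

    def handle_starttag(self, tag, attrs):
        if tag == 'tr':
            self.cells = []
        elif tag == 'td':
            if self.cells is None:       # <td> outside <tr>: start an implicit row
                self.cells = []
            self.cells.append('')
            self.in_td = True

    def handle_endtag(self, tag):
        if tag == 'td':
            self.in_td = False
        elif tag == 'tr' and self.cells is not None:
            # Collapse whitespace, double embedded quotes, and quote every cell.
            quoted = ['"%s"' % re.sub(r'\s+', ' ', c).strip().replace('"', '""')
                      for c in self.cells]
            self.rows.append(','.join(quoted))
            self.cells = None
            self.in_td = False

    def handle_data(self, data):
        if self.in_td:
            self.cells[-1] += data

    def getcsv(self):
        return ''.join(row + '\n' for row in self.rows)

if __name__ == '__main__':
    for name in sys.argv[1:]:
        parser = Html2Csv()
        with open(name, encoding='utf-8', errors='replace') as f:
            parser.feed(f.read())
        outname = os.path.splitext(name)[0] + '.csv'
        with open(outname, 'w', encoding='utf-8') as f:
            f.write(parser.getcsv())
        print('Wrote', outname)

As a side effect of convert_charrefs=True, html.parser already converts entities such as &amp;name; and &#ref; to text, which covers one item from the original script's to-do list.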
Hopefully this article proves helpful for your Python programming.