Popular roles are not just a source of inspiration; they are your efficiency assistant. With carefully curated role prompts you can quickly generate high-quality content, spark creative ideas, and find the solution that best fits your needs. Make creation easier and value more direct!
We continuously update the role library for different user needs, so you can always find the right entry point for inspiration.
Provides potential prior art reference suggestions for patents on a specific topic.
The following opinion presents preliminary search and comparison suggestions based on the legal standards and methodology of patent examination, outlining potential prior art references and leads for an application on "thermal runaway suppression in solid-state batteries." As the specific claim text has not been provided, the suggestions are organized by technical route, likely assignees, and verifiable public literature, for use in subsequent targeted searching and legal comparison.

I. Applicable examination standards and comparison framework
- Novelty and inventive step: The comparison should focus on whether the application proposes, within a solid-state secondary battery system, a combination of technical features (materials, structures, mechanisms, and system-level strategies) that substantially suppresses or significantly reduces thermal runaway risk, and whether that combination would have been obvious over routine techniques in the field.
- Support and enablement: Thermal runaway suppression involves coupled thermal-electrical-chemical mechanisms. The specification should provide reproducible embodiments (material compositions, content ranges, interface/structural designs, test methods and data) aligned with industry-standard safety and thermal-stability tests (e.g., DSC, ARC, thermal stability windows, polarization conditions).
- Claim construction points: Examination should drill down to the specific suppression mechanism (flame retardancy/heat absorption, thermally triggered shutdown, oxygen-release suppression, current limiting/disconnection, heat-spread management, etc.), the implementation level (electrolyte bulk, interface layer, electrode formulation, cell structure, module/system), and the applicable solid-state system (sulfide/oxide/polymer or composite).

II. Search strategy and classification guidance (suggested)
- Keyword groups (examples; cross-combine and restrict to solid-state contexts):
  - solid-state battery / all-solid-state battery / solid electrolyte / composite solid electrolyte / sulfide electrolyte / oxide electrolyte / polymer solid electrolyte
  - thermal runaway / thermal stability / flame retardant / heat absorption / thermally triggered shutdown / positive temperature coefficient (PTC) / phase change material (PCM) / oxygen suppression / coating / current limiting / disconnection / overheat protection
  - Specific materials: DOPO-type flame retardants, phosphate esters, Al(OH)3, Mg(OH)2, metal hydroxides, Al2O3/SiO2/TiO2 ceramic fillers, LiNbO3/Li3PO4/LiTaO3 coatings, LLZO, LGPS, etc.
- Typical CPC/IPC classes (for initial screening, not exhaustive): H01M (electrochemical energy storage), with particular attention to subgroups covering safety/thermal management and solid electrolytes; B60L (vehicle power system safety) may supplement system-level searching. Verify each class against the classification scheme before use.
- Key assignees (with substantial solid-state safety/thermal-stability publications): Toyota, Panasonic, Nissan, Mitsubishi, Murata, Idemitsu Kosan, Hitachi Zosen, Sumitomo Electric, Samsung Electronics/SDI, LG Energy Solution, SK On, QuantumScape, Solid Power, CATL, BYD, etc.

III. Potential prior art themes and corresponding disclosure directions (for targeted searching)
A. Non-flammability and high thermal stability via the solid electrolyte bulk
- Disclosures on the intrinsic safety of sulfide and oxide solid electrolytes (intrinsically non-flammable, higher onset temperatures for exothermic decomposition), and thermal-stability modification at interfaces with high-nickel cathodes and lithium-metal anodes (e.g., LiNbO3, Li3PO4, Al2O3 nano-coatings) to reduce high-temperature side reactions and exothermic peaks.
- Priority search targets: patents and papers from Toyota, Samsung, Idemitsu Kosan, Ohara, Sumitomo, etc., on interface coating, dopant stabilization, and thermal-stability data for sulfide (e.g., LGPS/LPSCl/LPS) and garnet LLZO systems.
B. Flame-retardant/heat-absorbing additives in composite solid electrolytes
- Introducing phosphorus-based flame retardants (e.g., DOPO derivatives, phosphate esters), metal-hydroxide heat-absorbing fillers (Al(OH)3, Mg(OH)2), or inert ceramic fillers (Al2O3, SiO2) into polymer-inorganic composite solid electrolytes to reduce flammability and suppress exothermic peaks.
- Priority search targets: flame-retardant-additive patents from LG/Samsung/SK on composite solid electrolyte formulations; academic disclosures of flame-retardant systems in polymer solid electrolytes.
C. Solid-state adaptation of thermo-responsive shutdown mechanisms
- Introducing low-melting/thermosensitive phases into composite solid electrolytes or interface layers so that a temperature rise sharply reduces ionic conduction, creating a "solid-state shutdown" function; or embedding PTC particles in current collectors/electrodes to raise resistance and limit current.
- Priority search targets: patent families migrating separator shutdown technology to solid/composite electrolytes, and implementations of PTC or thermally triggered interruption (CID/expansion-triggered) in solid-state cells.
D. Cathode oxygen-release suppression and high-temperature interface passivation
- Surface coatings on high-nickel cathodes (NMC/NCA) (LiNbO3, Li3PO4, AlPO4, Al2O3, etc.) and anion doping to suppress high-temperature oxygen release and reduce exothermic side reactions with sulfide electrolytes.
- Priority search targets: safety-improvement patents from Toyota, Panasonic, Nissan, etc., on the combination "inorganic cathode surface coating + sulfide electrolyte."
E. Structure/system-level heat spread and suppression
- Introducing phase change materials (PCM), thermal interface materials (TIM), heat spreaders, or insulating flame-retardant layers at the cell-to-module level to slow heat spread and prevent thermal runaway from propagating to adjacent cells.
- Priority search targets: disclosures from automakers and battery system suppliers (e.g., Tesla, LG, CATL, Nissan) on PCM placement, flame-retardant compartments, and heat-conduction path design for solid or semi-solid modules.

IV. Verified representative non-patent literature (NPL) for baseline comparison and background facts
The following are verifiable publications that typically discuss the relative safety of solid-state systems, exothermic interface reactions, and thermal-event risks triggered by shorting/dendrites; they may be used to establish common general knowledge and technical motivation:
1) Janek, J.; Zeier, W. G.
A solid future for battery development. Nature Energy, 2016, 1, 16141.
- Describes the potential safety advantages and key failure mechanisms of solid electrolytes (including interface and dendrite issues); usable to establish "adopting a solid electrolyte to improve safety" as common general knowledge.
2) Kamaya, N. et al. A lithium superionic conductor. Nature Materials, 2011, 10, 682–686.
- First systematic report of the LGPS superionic conductor; although focused on conduction mechanisms, it also establishes the sulfide solid electrolyte materials background and its non-flammable character, supporting the motivation for this material choice.
3) Porz, L. et al. Mechanism of Lithium Metal Penetration through Inorganic Solid Electrolytes. Advanced Energy Materials, 2017, 7, 1701003.
- Reveals lithium penetration/short-circuit mechanisms in inorganic solid electrolytes, indirectly linking local shorting to Joule heating and thermal runaway risk, and supporting the motivation for interface reinforcement, current limiting, and shutdown strategies.
4) Wang, Q.; Ping, P.; Zhao, X.; Chu, G.; Sun, J.; Chen, C. Thermal runaway caused fire and explosion of lithium ion battery. Journal of Power Sources, 2012, 208, 210–224.
- Although directed at liquid systems, its thermal-runaway mechanisms, exothermic sequence, and test paradigms (ARC/DSC) provide a baseline methodology for solid-state thermal risk assessment; usable to support test methods and effect-evaluation criteria.
5) Janek, J.; Zeier, W. G.; Zhang, W.; et al. (later reviews and commentaries by the same author group may be found in Joule, ACS Energy Lett., etc.)
- Multiple follow-up reviews extend the discussion of solid-state interface reactions, thermal stability, and safety trade-offs; usable as motivation sources for interface coating, material selection, and structured safety strategies.
6) Kato, Y. et al. High-power all-solid-state batteries using sulfide superionic conductors. Nature Energy, 2016, 1, 16030.
- Demonstrates the performance and interface engineering of sulfide solid-state systems under high power, typically including discussion of material thermal/chemical stability; supports motivation for the sulfide/interface-coating combination.
Note: The above NPL does not itself disclose every specific "thermal runaway suppression means" item by item, but it suffices to establish the field's consensus on safety problems, basic mechanisms, and improvement directions, supporting an obviousness analysis of claimed material-structure-mechanism-test feature combinations.

V. Leads for potential patent prior art (by assignee + technical theme; search via Espacenet/Patentscope/CNIPA)
- Toyota: interface coatings on sulfide solid electrolytes (LiNbO3/Li3PO4/Al2O3, etc.), resin-phase and inorganic-filler ratios in composite electrolytes, and module-level thermal management for vehicles. Search combinations of "all-solid-state battery," "sulfide electrolyte," "coating," "thermal stability," "safety."
- Samsung (Samsung/SAIT/SDI): flame-retardant/heat-absorbing additives (phosphorus-based, metal hydroxides) in polymer-inorganic composite solid electrolytes, PTC/current-limiting electrodes or current collectors, and sulfide/oxide interface stabilization. Keywords: "composite solid electrolyte," "flame retardant," "shutdown," "PTC," "thermal runaway."
- LG (LG Chem/LGES): composite solid electrolyte formulations, flame-retardant additives, module-level PCM and insulation strategies, and cathode coatings suppressing reactions with sulfide electrolytes. Keywords: "DOPO," "phosphate flame retardant," "phase change material," "oxygen release suppression."
- Sumitomo/Idemitsu Kosan/Ohara/Hitachi Zosen/Murata, etc.: materials-side (sulfide/oxide electrolyte) thermal stability and interface design, especially technical statements on "suppressing interface impedance at high temperature + reducing side-reaction heat."
- Newer entrants (QuantumScape, Solid Power): safety improvements pairing oxide-based (LLZO/related composite) electrolytes with lithium anodes; disclosures on structured insulation and heat-conduction paths.

VI. Comparison focus points (for a claim-element-to-prior-art mapping table)
- Material elements: whether the claims limit the chemical class and content range of flame-retardant/heat-absorbing additives; inorganic filler type, particle size, and interface compatibility; explicit limitation of the electrolyte matrix (polymer/sulfide/oxide).
- Structural elements: presence of a thermally triggered shutdown path (ionic-conductivity drop induced by phase change/melting/glass transition); coating/buffer layers at the electrode-electrolyte interface; implementation of PTC, current-limiting, or CID disconnection mechanisms in a solid-state system.
- Mechanism and data: whether thermal analysis (DSC/ARC/TGA) and abuse tests (overcharge, short circuit, hot box, nail penetration, etc.) demonstrate the suppression effect; whether comparative samples were tested under consistent conditions with statistics and repeatability.
- System level: whether cell-level suppression means are jointly claimed with module-level heat-spread management (PCM/insulation/heat conduction); quantitative propagation-suppression metrics (trigger threshold of adjacent cells, propagation delay, etc.).

VII. Suggested next steps
- Based on the application's independent claims, prioritize WO/US/EP/JP publications from the last 10 years (preferably the last 5) around the themes and assignees in Parts III-V, and build an element-by-element mapping.
- For claims asserting "solid-state thermal shutdown / flame-retardant additives," focus on patent families covering "composite solid electrolyte + flame-retardant/heat-absorbing additive/thermo-responsive phase" and check for explicit teachings and experimental data on "conductivity decrease upon temperature rise."
- For claims combining "cathode oxygen suppression + sulfide electrolyte interface synergy," check whether disclosed cathode-coating combinations already teach the effect of "reducing interface heat release / improving thermal stability."
- For module-level schemes, compare disclosed PCM/insulation-layer adaptations and parameter ranges for solid-state cells, and examine whether the claim merely transplants known liquid-system safety structures into a solid-state setting.

Remarks
- Absent specific claims, embodiments, and data, the above suggestions rest on technical themes and verified NPL, aiming to narrow the search scope and clarify comparison points. A formal examination opinion should be completed after obtaining the claims and specification, with exact publication numbers (WO/US/EP/JP) and full-text comparison. The NPL citations above are verified public, authoritative sources suitable for establishing common general knowledge and technical motivation. For specific patent publication numbers, run targeted searches in Espacenet/Patentscope/Google Patents using the assignees and keywords above, and verify legal status and priority chains before citing, to ensure accuracy and admissibility.
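The cross-combination of keyword groups recommended in the search strategy above can be expanded mechanically. A minimal sketch, assuming illustrative group contents taken from the memo and a hypothetical `build_queries` helper; the `AND`/quoting syntax is a generic placeholder to be adapted to the target database (Espacenet, Patentscope, etc.):

```python
from itertools import product

# Keyword groups from the search strategy above; cross-combining them
# restricts hits to thermal-safety disclosures in solid-state contexts.
SOLID_STATE = ["all-solid-state battery", "solid electrolyte", "sulfide electrolyte"]
SAFETY = ["thermal runaway", "flame retardant", "shutdown", "PTC"]
MATERIALS = ["DOPO", "Al(OH)3", "LiNbO3 coating", "LLZO"]

def build_queries(*groups):
    """Expand keyword groups into quoted AND-combined query strings
    (illustrative syntax; adapt to the actual database query language)."""
    return [" AND ".join(f'"{kw}"' for kw in combo) for combo in product(*groups)]

queries = build_queries(SOLID_STATE, SAFETY, MATERIALS)
print(len(queries))   # 3 * 4 * 4 = 48 candidate query strings
print(queries[0])
```

In practice one would prune combinations that return no hits and add per-database field codes (title/abstract/claims), but the Cartesian expansion above is the core of the "cross-combine and restrict" instruction.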
Below is a non-exhaustive but representative set of prior art references reasonably pertinent to claims directed to sparsifying Transformer self-attention (i.e., constraining, masking, selecting, or otherwise reducing pairwise attention computations to achieve sub-quadratic complexity or sparsity in the attention pattern). Each entry identifies the earliest publicly available date known with reasonable certainty and explains the material teachings relevant to novelty and obviousness analyses under 35 U.S.C. §§ 102 and 103. Where appropriate, I indicate how a person of ordinary skill in the art (POSITA) would have been motivated to combine or adapt these teachings.

A. Foundational structured-sparse attention patterns (fixed, local, dilated, global)
1) Child, Gray, Radford, Sutskever, “Generating Long Sequences with Sparse Transformers,” arXiv:1904.10509 (first posted Apr. 24, 2019).
- Material teaching: Introduces block-sparse self-attention with predefined local, strided/dilated, and fixed sparse patterns enabling O(n√n) or related sub-quadratic behavior. Shows that structured masks preserve long-range dependencies while avoiding full O(n²) cost.
- Relevance: Anticipates or renders obvious claim elements reciting block-sparse attention, strided/dilated patterns, mask-based sparsification, or hybrid local/global patterns implemented at the head or layer level. Provides enabling detail on mask construction and training/inference behavior.
2) Beltagy, Peters, Cohan, “Longformer: The Long-Document Transformer,” arXiv:2004.05150 (first posted Apr. 10, 2020).
- Material teaching: Sliding-window attention plus a limited set of global tokens with full connectivity; optionally dilated windows. Achieves linear-time attention with task-specific global tokens (e.g., CLS).
- Relevance: Anticipates or renders obvious claims directed to windowed attention, dilated windows, and “global” token subsets with unrestricted attention while the remaining tokens are restricted to local windows.
3) Ainslie et al., “ETC: Encoding Long and Structured Inputs in Transformers,” arXiv:2004.08483 (first posted Apr. 18, 2020).
- Material teaching: Global-local attention with two token types (global and long), sparse connectivity between them, and restricted long-to-long attention (e.g., within segments). Introduces structured sparse patterns for long documents.
- Relevance: Anticipates claims reciting multi-partition token sets with asymmetric sparse connectivity or hierarchical/global “hub” tokens, and supports obviousness combinations with other windowed or block patterns.
4) Zaheer et al., “Big Bird: Transformers for Longer Sequences,” arXiv:2007.14062 (first posted Jul. 28, 2020).
- Material teaching: Combines three sparse components (random, sliding window, and global attention) to approximate dense attention with strong theoretical guarantees (e.g., universal approximation, Turing completeness). Empirically effective on long sequences.
- Relevance: Anticipates claims covering hybrid sparse masks that include any mix of windowed, random, and global edges; further supports §103 arguments that adding random or global edges to local windows would have been an obvious robustness enhancement.
5) Parmar et al., “Image Transformer,” arXiv:1802.05751 (first posted Feb. 16, 2018).
- Material teaching: Local (block) attention and restricted receptive fields for 2D/1D sequences to reduce cost in visual and sequence domains.
- Relevance: Early disclosure of locality-constrained attention patterns. Suggests obviousness of sliding-window or block-local attention for other modalities (e.g., text).
6) Ho, Kalchbrenner, Weissenborn, Salimans, “Axial Attention in Multidimensional Transformers,” arXiv:1912.12180 (first posted Dec. 23, 2019).
- Material teaching: Factorizes full attention along axes (rows/columns), reducing complexity by decomposing global attention into multiple sparse passes.
- Relevance: Anticipates claim elements that factorize attention into sparse sub-operations achieving sub-quadratic complexity; provides alternatives to block/window masks.

B. Content- or data-dependent sparsification (hashing, clustering, top-k)
7) Kitaev, Kaiser, Levskaya, “Reformer: The Efficient Transformer,” arXiv:2001.04451 (first posted Jan. 13, 2020).
- Material teaching: Locality-sensitive hashing (LSH) attention that routes tokens to buckets so attention is computed within buckets, yielding sub-quadratic complexity. Also introduces chunking and reversible layers.
- Relevance: Anticipates claims reciting LSH-based routing or content-dependent grouping to sparsify attention. Provides enabling details for hash construction, bucket attention, and complexity.
8) Roy, Saffar, Vaswani, Grangier, “Routing Transformers,” arXiv:2003.05997 (first posted Mar. 13, 2020).
- Material teaching: Online k-means-like clustering of token representations to route attention within clusters (sparse), improving efficiency and quality.
- Relevance: Anticipates claims involving clustering-based routing, nearest-neighbor grouping, or dynamic partitions that bound attention neighborhood size per query.
9) Zhou et al., “Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting,” arXiv:2012.07436 (first posted Dec. 14, 2020).
- Material teaching: ProbSparse self-attention that selects “dominant” queries based on score distributions (e.g., via KL-divergence proxies), computing attention only for selected queries.
- Relevance: Anticipates claims to score-driven or probabilistic top-k selection of attention computations, query pruning, or sparsified score matrices based on distributional criteria.

C. Random or hybrid sparsity and theoretical guarantees
10) BigBird (Zaheer et al., 2020) as above.
- Material teaching: Random edges + local windows + global anchors; theoretical proofs that such sparse graphs approximate dense attention properties.
- Relevance: Strong reference against novelty for claims that combine random, local, and global connectivity with asserted theoretical capacity benefits.
11) Longformer (Beltagy et al., 2020) and ETC (Ainslie et al., 2020) as above.
- Relevance: Together with BigBird, these references provide a comprehensive set of hybrid sparse patterns pre-dating many later applications.

D. Low-rank/kernel approximations as closely related alternatives (obviousness rationales)
12) Wang et al., “Linformer: Self-Attention with Linear Complexity,” arXiv:2006.04768 (first posted Jun. 8, 2020).
- Material teaching: Projects keys/values to low-rank spaces to achieve linear complexity.
- Relevance: While not “sparse masks,” teaches the same problem (sub-quadratic attention) and would motivate a POSITA to consider structured sparsity as an alternative known path to the same efficiency goal.
13) Choromanski et al., “Rethinking Attention with Performers,” arXiv:2009.14794 (first posted Sep. 30, 2020).
- Material teaching: Kernel-based FAVOR+ linear attention approximations.
- Relevance: Supports §103 combinations showing that, by 2020, multiple families of solutions (sparse masks, hashing/clustering, low-rank, and kernelization) were well known; choosing a sparse-mask embodiment would have been one of a finite set of predictable options to reduce cost.
14) Katharopoulos et al., “Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention,” arXiv:2006.16236 (first posted Jun. 29, 2020).
- Material teaching: Linear attention via the kernel trick; O(n) complexity.
- Relevance: Same as above: an alternative efficiency pathway indicative of the state of the art and of motivations to avoid dense O(n²) operations.

E. Earlier block/local or structured attention and sparsity-inducing distributions
15) Shen et al., “Bi-BloSAN: Bidirectional Block Self-Attention Network for Fast Text Classification,” arXiv:1804.07094 (first posted Apr. 19, 2018).
- Material teaching: Block-wise restricted self-attention to reduce complexity for sequences while preserving bidirectional context through hierarchical blocks.
- Relevance: Pre-Transformer or contemporaneous structured sparse self-attention paradigm; anticipates block-based sparsity rationales and hierarchical masking.
16) Martins, Astudillo, “From Softmax to Sparsemax: A Sparse Model of Attention and Multi-Label Classification,” arXiv:1602.02068 (first posted Feb. 6, 2016).
- Material teaching: Sparsemax yields exactly sparse attention distributions (zeroing some weights) as a drop-in replacement for softmax.
- Relevance: For claims reciting sparsity in the attention weight vector itself (as opposed to a hard mask on the score matrix), this anticipates using alternative normalizers to induce zeros. It also supports obviousness for combining mask-based sparsity with sparse normalizers to further reduce compute.
17) Lee et al., “Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks,” arXiv:1810.00825 (first posted Oct. 1, 2018).
- Material teaching: Inducing-point attention (ISAB) reduces quadratic cost by attending via a small set of learned inducing points, yielding structured sparsity of interactions.
- Relevance: Anticipates use of a small, learned “global” or “inducing” token set to mediate attention, closely related to global tokens in Longformer/ETC.
18) Dai et al., “Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context,” arXiv:1901.02860 (first posted Jan. 9, 2019).
- Material teaching: Segment-level recurrence and relative positions enabling long-context modeling with reduced recomputation.
- Relevance: Not sparse per se, but evidences the field’s recognized need to scale Transformers to longer sequences, motivating sparse designs.
19) Xiong et al., “Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention,” arXiv:2102.03902 (first posted Feb. 7, 2021).
- Material teaching: Low-rank Nyström approximation to attention.
- Relevance: Alternate sub-quadratic approximation supporting obviousness rationales and combinations with sparse/global token concepts.

F. Survey and state-of-the-art syntheses
20) Tay, Dehghani, Bahri, Metzler, “Efficient Transformers: A Survey,” arXiv:2009.06732 (first posted Sep. 14, 2020).
- Material teaching: Systematic taxonomy of efficient attention mechanisms including structured sparsity (local, strided, global), hashing/clustering, and low-rank and kernel methods, with citations to earlier works.
- Relevance: Demonstrates that, by late 2020, sparse attention patterns and their design space were well known. Useful to establish motivation to combine known sparse patterns and to rebut assertions of non-obviousness premised on “unexpected” efficiency/accuracy trade-offs.

G. Additional domain-specific sparse attention (contextual corroboration)
21) Huang et al., “CCNet: Criss-Cross Attention for Semantic Segmentation,” arXiv:1811.11721 (first posted Nov. 28, 2018).
- Material teaching: Row-and-column attention producing sparse connectivity patterns with a broad receptive field at reduced cost.
- Relevance: Shows that structured sparsity in attention to enlarge receptive fields at sub-quadratic cost was known in vision; supports obviousness to apply analogous patterns in text.

Legal analysis considerations
- Anticipation (§102): Many of the above references expressly disclose fixed sparse masks (e.g., local/sliding windows, dilated/strided patterns), hybrid masks with global tokens, and random edges (Child 2019; Longformer 2020; ETC 2020; BigBird 2020).
If an asserted claim reads on implementing attention through any predetermined subset of token pairs (with local windows and one or more global tokens) to achieve sub-quadratic complexity, Longformer and BigBird are particularly strong anticipatory art. Child (2019) is often the earliest widely cited disclosure of block-sparse Transformer attention.
- Obviousness (§103): Even where a claim adds (a) random edges to local windows, (b) a small set of global tokens, or (c) dilations/strides, the combination is taught or suggested across Child (2019), Longformer (2020), ETC (2020), and BigBird (2020). Content-based routing variants are taught by Reformer (LSH) and Routing Transformers (clustering). Score-based query pruning is taught by Informer. Given the well-documented goal of reducing O(n²) complexity, a POSITA would have had strong motivation to adopt any of these sparse patterns, with a reasonable expectation of success, as corroborated by Tay et al. (2020) and the widespread open-source implementations that followed.
- Enablement and written description (§112): The cited works provide sufficient implementation details for mask construction, complexity analysis, and training dynamics (e.g., handling global tokens, maintaining gradient flow across sparse patterns, batching). They are thus probative prior art for enablement and written-description purposes.

Targeted search guidance
- CPC/IPC classes: G06N 3/08 (learning methods for artificial neural networks) and G06F 17/18 (complex mathematical operations for statistical evaluation), with cross-reference to G06F 15/16 or G06F 7/00 for algorithmic graph-sparsification concepts. In practice, recent AI filings often concentrate in G06N 3/00 subclasses.
- Keywords/strings: “sparse attention,” “block-sparse attention,” “sliding window attention,” “dilated attention,” “global tokens,” “random attention edges,” “LSH attention,” “clustered attention,” “routing transformer,” “probabilistic attention,” “top-k attention,” “inducing points attention,” “Nyström attention,” “linear attention.”
- Likely assignees/authors: Google/DeepMind, Google Research/Brain (BigBird, Reformer, Routing Transformers, Performer, ETC, Nyströmformer); Allen Institute for AI (Longformer); Facebook/Meta (Linformer); OpenAI (Sparse Transformers); academic groups associated with the above arXiv works.

Use and limitations
- The precise relevance of each item will depend on the priority date and the specific claim language (e.g., whether the claim is limited to fixed masks vs. content-dependent selection; whether “sparsity” is in the mask or in the normalized attention distribution; whether particular complexity bounds, e.g., O(n) or O(n log n), are recited).
- If you provide the asserted claims, I can prepare a claim chart mapping these references to specific limitations to support §§102/103 rejections.

In view of the above, the identified non-patent literature provides robust, enabling disclosures of Transformer attention sparsification via fixed masks, hybrid local/global/random connectivity, hashing/clustering-based routing, and score-driven query selection well before mid-2021, and would constitute strong prior art against later-filed claims to such techniques.
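The hybrid local/global/random connectivity discussed above (sliding windows per Longformer, global tokens, random edges per BigBird) can be sketched as a boolean mask over token pairs. This is a minimal NumPy illustration, not any paper's reference implementation; the window size, number of global tokens, and random-edge count are illustrative parameters:

```python
import numpy as np

def hybrid_sparse_mask(n, window=2, n_global=1, n_random=2, seed=0):
    """Boolean n x n attention mask combining the three sparse components
    discussed above: local sliding window, global tokens (which attend
    to and from everything), and a few random edges per query."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((n, n), dtype=bool)
    idx = np.arange(n)
    # Sliding window: each token attends to neighbors within `window`.
    mask |= np.abs(idx[:, None] - idx[None, :]) <= window
    # Global tokens: full rows and columns for the first n_global tokens.
    mask[:n_global, :] = True
    mask[:, :n_global] = True
    # Random edges: extra connections per query, for robustness.
    for i in range(n):
        mask[i, rng.choice(n, size=n_random, replace=False)] = True
    return mask

m = hybrid_sparse_mask(16)
# The number of allowed pairs grows like n * (window + globals + randoms),
# i.e. linearly in n, versus n**2 for dense attention.
print(m.shape, int(m.sum()))
```

In a real model such a mask is applied to the attention score matrix (disallowed pairs set to negative infinity before the softmax), or, for actual speedups, the allowed pairs are gathered into block-sparse kernels rather than materializing the full n x n matrix.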
The following opinion addresses wearable electrocardiogram (ECG) monitoring electrodes, providing actionable prior art leads and search paths for novelty and inventive-step examination. It focuses on technical categories suitable for comparison and representative literature sources, pointing wherever possible to publicly verifiable material. As the specific claims have not been provided, the suggestions are organized by technical category and evidence type, for subsequent targeted searching and comparison.

I. Applicable legal and examination standards (overview)
- China: under the Patent Law and the Guidelines for Patent Examination, the analysis centers on novelty and inventive step (the distinguishing features over the prior art and whether deriving them from the prior art would have been obvious).
- Europe: EPC Articles 54 (novelty) and 56 (inventive step); the EPO problem-solution approach identifies the closest prior art, formulates the objective technical problem, and assesses obviousness.
- United States: 35 U.S.C. §102 (novelty) and §103 (obviousness); KSR v. Teleflex (550 U.S. 398, 2007) brings routine combinations from adjacent fields, motivation, and predictability into the analysis.
- Note: industry standards and regulatory documents usually do not directly limit structural features, but they can serve as common general knowledge or motivation sources (e.g., general requirements for skin compatibility and electrode performance metrics), supporting inventive-step analysis.

II. Main prior art categories and typical sources
To cover the mainstream implementation routes for wearable ECG electrodes, the following categories are suggested. Each lists high-confidence non-patent literature and industry standards first, then representative commercial products/likely assignees, for precise follow-up searching in mainstream databases (CNIPA/EPO/USPTO, IEEE, ScienceDirect, PubMed, standards bodies).

(1) Patch-type (adhesive) Ag/AgCl or conductive-hydrogel integrated electrodes
- Key points: disposable or semi-disposable patches; conductive gel/hydrogel layer; backing and adhesive system (nonwoven/foam/film); leads or built-in electronics; motion-artifact suppression and sweat management.
- Industry standards and regulation:
  - ANSI/AAMI EC12 (general requirements for disposable ECG electrodes, covering bias potential, AC impedance, noise, defibrillation recovery, etc.).
  - ISO 10993 series (biological evaluation of skin-contact materials: 10993-1 general principles; 10993-5 cytotoxicity; sensitization/irritation formerly in 10993-10, now superseded by 10993-23; sample preparation in 10993-12).
  - IEC 60601-2-47 (particular safety and performance requirements for ambulatory ECG systems, covering ambulatory recording and signal quality).
- Non-patent literature (high-confidence examples):
  - Kim D.-H. et al., "Epidermal Electronics," Science, 2011, 333: 838–843. Reports ultrathin, skin-conformal electrodes and interconnects for ECG and other physiological signals, covering strain-relief geometry and skin-interface design.
  - Stoppa M., Chiolerio A., "Wearable electronics and smart textiles: a critical review," Sensors, 2014, 14(7): 11957–11992. Reviews materials, processes, and system integration of patch and textile electrodes, with systematic coverage of common electrode materials and attachment techniques.
- Commercial and likely-assignee leads (for deeper database searching):
  - iRhythm (Zio patch, long-term ECG monitoring with integrated electrode and recording unit).
  - VitalConnect (VitalPatch), MC10 (BioStamp-type adhesive physiological monitoring patches).
  - 3M, Ambu (disposable Ag/AgCl gel electrodes, backing structures, adhesive systems).
- Suggested search points: single-/multi-lead patches; built-in electrode geometry and electrode-skin impedance management; sweat channels/microporous backing; peelable liners/tiered adhesion; long-wear artifact resistance.

(2) Dry electrodes (conductive rubber, metal coatings/conductive polymers) and chest straps/molded parts
- Key points: gel-free; conductive rubber or metal/carbon coatings; contact maintained by mechanical preload (chest strap/elastic garment); sweat resistance and wash durability.
- Non-patent literature:
  - Chi Y.-M., Jung T.-P., Cauwenberghs G., "Dry-Contact and Noncontact Biopotential Electrodes: A Review," IEEE Reviews in Biomedical Engineering, 2010, 3: 106–119. Systematically reviews the mechanisms, materials, and performance factors of dry and non-contact electrodes; suitable as a common-knowledge baseline.
- Commercial and likely-assignee leads:
  - Polar (chest-strap ECG), Zephyr (BioHarness), the Textronics/DuPont smart-textile predecessor teams, and major sports/medical wearable vendors.
- Search points: conductive silicone/rubber formulations and surface microstructures; textile cladding; sweat/sebum management; motion-artifact test methods.

(3) Textile electrodes (fabric-integrated conductive fibers/coatings)
-
Key points: conductive fibers (silver, stainless steel, carbon-based) or coatings (PEDOT:PSS, silver paste) forming textile electrodes; garment pressure and fit geometry; wash durability.
- Non-patent literature:
  - Stoppa & Chiolerio (above) includes a dedicated section on textile electrodes.
  - Supplementary keywords: "textile ECG electrodes," "knitted/woven conductive fabric ECG," "PEDOT:PSS textile electrode ECG," locating many evaluation papers from 2005-2018.
- Commercial and likely-assignee leads:
  - Smart garments from HealthWatch, Hexoskin, OMsignal, etc.; public reports from EU-funded projects (e.g., BIOTEX).

(4) Capacitive (non-contact) ECG electrodes
- Key points: coupling through a dielectric isolation layer or clothing; high-input-impedance front end, guard ring, mains-interference rejection; suited to seats, mattresses, and garment linings.
- Non-patent literature:
  - Chi et al. (2010 review) covers capacitive electrode mechanisms and implementations.
  - Suggested keywords: "capacitive ECG electrode," "non-contact ECG through clothing," "guard ring electrode ECG"; several classic implementation and noise-modeling papers appeared in 2002-2015.
- Commercial and likely-assignee leads:
  - Automotive seat/mattress monitoring vendors; university technology-transfer patent families.

(5) Microneedle/micropillar-array dry electrodes and "tattoo-type" ultrathin electrodes
- Key points: microstructures penetrating or pressing into the stratum corneum to lower contact impedance; stretchable substrates and serpentine interconnects; ultrathin "electronic skin" films.
- Non-patent literature (high-confidence examples):
  - The serpentine interconnects and ultrathin skin-adhesive platform of Kim D.-H. et al. (Science 2011) are a general foundation for this class of designs.
  - Suggested keywords: "microneedle ECG electrode," "epidermal ECG tattoo electrode," "graphene tattoo ECG," locating empirical papers since 2012 on multiple materials and process routes (polymer microneedles, gold/carbon nanomesh, electrode-interface modeling).
- Commercial and likely-assignee leads:
  - Academic spin-offs and materials companies (flexible electronics, stretchable conductors); filings by medical-patch companies on motion-artifact improvements.

III. International/industry standards and regulatory documents (as common knowledge and motivation sources)
- ANSI/AAMI EC12: safety and performance requirements for disposable ECG electrodes, covering bias potential, AC impedance, noise, defibrillation recovery, adhesion, and labeling. Usable to define "routine performance metrics" and "routine means of meeting them."
- IEC 60601-2-47: particular standard for ambulatory ECG equipment, covering recording/analysis system requirements, lead quality, interference immunity, and safety.
- IEC 60601-1, IEC 60601-1-2: basic safety and electromagnetic compatibility; their system-level requirements for patch-integrated electronics constitute a general motivation.
- ISO 10993 series (especially 10993-1, -5, -23, -12): provides the compliance pathway for biocompatibility of skin-contact materials (adhesives, hydrogels, backings, conductive coatings), often cited as motivation for material selection and encapsulation processes.
- US 21 CFR 870.2360 (Electrocardiograph electrode): regulatory classification and general controls for ECG electrodes; FDA guidance and public 510(k) summaries evidence industry consensus on typical structures and test methods.

IV. Search and classification suggestions (for cross-jurisdiction database work)
- Preferred IPC/CPC classes:
  - A61B 5/0402 (electrodes for measuring bioelectric signals, applicable to ECG)
  - A61B 5/0404 (dry electrodes)
  - A61B 5/0406 (non-contact/capacitive electrodes)
  - A61B 5/042 (subgroups on electrode material or structural details; refine per database)
  - For textiles, cross-search A41D (garments) subgroups on integrated electrical functions
- Keyword combinations (Chinese and English, to broaden coverage):
  - "可穿戴 心电 电极/贴片/水凝胶/干式/电容式/纺织/微针/纹身/可拉伸/超薄/蛇形 互连/抗伪差/出汗 管理/皮肤 兼容"
  - "wearable ECG electrode patch hydrogel Ag/AgCl dry electrode capacitive textile microneedle epidermal tattoo stretchable serpentine motion artifact sweat management biocompatible"
- Target assignees and product names for reverse patent-family searching:
  - iRhythm (Zio), VitalConnect (VitalPatch), MC10 (BioStamp), 3M (Red Dot Ag/AgCl electrodes), Ambu (BlueSensor), Polar, Zephyr, HealthWatch, OMsignal, Hexoskin.
V. Summary of technical points for mapping against common claim features
- Electrode-skin interface: Ag/AgCl with conductive hydrogel; dry conductive elastomers/metal coatings; microstructures lowering interface impedance; capacitive isolation dielectric and guard-ring design.
- Mechanics and adhesion: layered backing structure (foam/film/fabric), breathable/microporous structure, tiered adhesion for extended wear, peel force and re-attachment performance.
- Artifact and noise suppression: mechanical decoupling (serpentine interconnects, soft-to-hard gradient transitions), sweat drainage, surface microtexture, high-input-impedance front end / driven right leg (DRL) / guard ring against mains noise.
- System integration: integrated layout of electrodes with recording/storage/Bluetooth modules; power and encapsulation; water/sweat resistance rating; washability (textile schemes).
- Compliance and testing: citing AAMI EC12, IEC 60601-2-47, and ISO 10993 as evidence that the design targets, and the motivation to meet them, are routine.

VI. Specific, priority-verifiable non-patent literature (high confidence)
- Kim D.-H., et al., Epidermal Electronics, Science, 2011, 333: 838–843.
- Chi Y.-M., Jung T.-P., Cauwenberghs G., Dry-Contact and Noncontact Biopotential Electrodes: A Review, IEEE Reviews in Biomedical Engineering, 2010, 3: 106–119.
- Stoppa M., Chiolerio A., Wearable electronics and smart textiles: a critical review, Sensors, 2014, 14(7): 11957–11992.

VII. Suggested next steps
- Within the classification framework above, use Kim 2011 and Chi 2010 as "common general knowledge" baselines, and decompose the application's key distinguishing features (e.g., specific microstructure geometry, specific multilayer adhesive structures, sweat-management channels, paths to specific impedance and noise metrics) for element-by-element comparison.
- Run family searches around the specific product names and assignees, prioritizing early family publications (WO/US/EP/CN) to maximize the chance of novelty-destroying comparisons; use AAMI EC12, ISO 10993, and IEC 60601-2-47 as evidence of technical motivation and reasonable expectation of success within the problem-solution framework.
- For asserted functional effects such as "long wear," "low artifact," and "high comfort," focus the search on measured comparative papers and standardized test-method descriptions, to substantiate whether the technical effect exceeds what the field would expect.

Important notes
- To avoid providing inaccurate specific patent numbers, this document offers only high-confidence assignee/product leads and classification search paths at the patent level. Verify publication numbers and compare full texts in the CNIPA, EPO, and USPTO databases accordingly.
- The industry standards and review literature above can reliably define common general knowledge and routine technical means, and serve as evidence of "technical teaching" and "motivation" in inventive-step analysis. For any numeric limits or test conditions, confirm against the latest edition of the relevant standard.
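The element-by-element decomposition and mapping recommended above can be kept in a simple structured form that makes unmapped (candidate distinguishing) features visible. A minimal sketch; the claim elements and the teachings paired with each reference below are illustrative placeholders, not actual findings:

```python
# Minimal claim-chart structure for element-by-element mapping.
# Elements and the per-reference teaching notes are illustrative placeholders.
claim_chart = {
    "claim 1(a): dry electrode with surface microstructure": [
        ("Chi 2010, IEEE Rev. Biomed. Eng.",
         "reviews dry-electrode microstructures lowering contact impedance"),
    ],
    "claim 1(b): serpentine interconnect on stretchable substrate": [
        ("Kim 2011, Science (Epidermal Electronics)",
         "discloses serpentine interconnects on ultrathin skin-conformal substrates"),
    ],
    "claim 1(c): sweat-management microchannel in backing": [],  # nothing mapped yet
}

def unmapped_elements(chart):
    """Elements with no prior art mapped: candidate distinguishing features."""
    return [element for element, refs in chart.items() if not refs]

def render(chart):
    """Flatten the chart into readable lines for a memo or spreadsheet export."""
    lines = []
    for element, refs in chart.items():
        lines.append(element)
        for ref, teaching in refs:
            lines.append(f"  - {ref}: {teaching}")
    return "\n".join(lines)

print(render(claim_chart))
print("unmapped:", unmapped_elements(claim_chart))
```

Keeping the chart as data rather than prose makes it easy to re-render after each search pass and to flag which limitations still lack a mapped disclosure.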
Before project initiation, quickly build a prior art list and risk matrix, assess filing value and priority, devise design-around and portfolio strategies, and prepare management decision briefings.
Rapidly locate key reference documents and generate element-by-element comparisons; draft office-action response points, invalidation-argument outlines, and amendment directions, markedly improving search and drafting efficiency.
Complete novelty checks and differentiation validation early in R&D, identify patentable features and improvement points, avoid duplicated investment and low-value filings, and keep the commercialization path on track.
Assess patent barriers and design-around paths in a given segment, guide product roadmaps and release planning, strengthen the IP section of fundraising materials, and reduce compliance risk.
In due diligence, quickly screen a target's core patent risks, produce readable prior art summaries and timelines, and support valuation judgments and negotiation points.
Pre-screen potential infringement points, generate evidence lists and talking points, and plan licensing, authorization, or technical substitution in advance to reduce the likelihood of disputes.
Build a set of "prior art reference suggestion" prompts for patent and R&D teams, helping users in the shortest time to:
• Have the AI act as a senior examiner and quickly produce highly relevant prior art lists and conclusive assessments around a given topic/keywords.
• Accurately identify the key documents and evidence locations bearing on novelty/inventive step, with consistency/difference comparisons against the claim elements.
• Output directly usable search paths (keywords, synonyms, classification codes, semantic leads) and jurisdiction-compliance notes, supporting multilingual scenarios.
• Generate reusable structured conclusions and professional phrasing for project review, filing-strategy optimization, office-action response planning, and competitive-intelligence briefs.
• Improve team collaboration with standardized templates, lower outsourcing costs and the risk of missed references, and drive trial conversion and sustained paid value.
Copy the prompt generated by the template into your usual chat app (such as ChatGPT or Claude) and use it directly in conversation, with no extra development. Suited to quick personal trials and lightweight use.
Turn the prompt template into an API: your program can modify template parameters at will and call it directly through the interface, enabling automation and batch processing. Suited to developer integration and embedding in business systems.
Configure the corresponding server address in an MCP client so your AI application can invoke the prompt template automatically. Suited to advanced users and team collaboration, letting prompts move seamlessly across AI tools.
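An API integration of the kind described above typically amounts to posting template parameters as JSON. A minimal sketch using only the standard library; the endpoint URL, parameter names, and response shape are hypothetical placeholders, so consult the actual API documentation before use:

```python
import json
from urllib import request

# Hypothetical endpoint; substitute the real URL and credentials
# from the actual API documentation.
API_URL = "https://api.example.com/v1/prompt-templates/prior-art/run"

def build_payload(topic, jurisdiction="CN", language="zh"):
    """Assemble template parameters for one call. The parameter names
    here are illustrative, not a documented schema."""
    return {"topic": topic, "jurisdiction": jurisdiction, "language": language}

def call_template(payload, api_key):
    """Send the payload as JSON (one plausible integration shape)."""
    req = request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

# Batch processing: build payloads for many topics (without sending here).
topics = ["固态电池热失控抑制", "可穿戴心电电极"]
payloads = [build_payload(t) for t in topics]
print(len(payloads), payloads[0]["jurisdiction"])
```

Looping `build_payload` over a topic list and dispatching the calls is what turns the template into the automation/batch workflow the page describes.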